Tag | Ind1 | Ind2 | Content
---|---|---|---
000 | | | 03586nam a22004935i 4500
999 | | | _c200458024 _d76236
003 | | | TR-AnTOB
005 | | | 20231124085953.0
007 | | | cr nn 008mamaa
008 | | | 220103s2022 sz \| s \|\|\|\| 0\|eng d
020 | | | _a9783030899295
024 | 7 | | _a10.1007/978-3-030-89929-5 _2doi
040 | | | _aTR-AnTOB _beng _erda _cTR-AnTOB
041 | | | _aeng
050 | | 4 | _aTK7867
072 | | 7 | _aTJFC _2bicssc
072 | | 7 | _aTEC008010 _2bisacsh
072 | | 7 | _aTJFC _2thema
090 | | | _aTK7867EBK
100 | 1 | | _aSalem, Fathi M. _eauthor. _4aut _4http://id.loc.gov/vocabulary/relators/aut
245 | 1 | 0 | _aRecurrent Neural Networks _h[electronic resource] : _bFrom Simple to Gated Architectures / _cby Fathi M. Salem.
250 | | | _a1st ed. 2022.
264 | | 1 | _aCham : _bSpringer International Publishing : _bImprint: Springer, _c2022.
300 | | | _a1 online resource
336 | | | _atext _btxt _2rdacontent
337 | | | _acomputer _bc _2rdamedia
338 | | | _aonline resource _bcr _2rdacarrier
347 | | | _atext file _bPDF _2rda
505 | 0 | | _aIntroduction -- 1. Network Architectures -- 2. Learning Processes -- 3. Recurrent Neural Networks (RNN) -- 4. Gated RNN: The Long Short-Term Memory (LSTM) RNN -- 5. Gated RNN: The Gated Recurrent Unit (GRU) RNN -- 6. Gated RNN: The Minimal Gated Unit (MGU) RNN.
520 | | | _aThis textbook offers a compact yet comprehensive treatment of recurrent neural networks, developing the analytical and design steps from scratch. It treats general recurrent neural networks with principled training methods that yield (generalized) backpropagation through time (BPTT). The author focuses on the basics and nuances of recurrent neural networks, giving a technical and principled treatment of the subject with a view toward coding in deep learning computational frameworks, e.g., Python and Tensorflow-Keras. Recurrent neural networks are treated holistically, from simple to gated architectures, adopting the technical machinery of adaptive non-convex optimization with dynamic constraints to leverage its systematic power in organizing the learning and training processes. This permits a flow of concepts and techniques that provides grounded support for design and training choices. The author's approach enables strategic co-training of output layers, using supervised learning, and hidden layers, using unsupervised learning, to generate more efficient internal representations and better accuracy. As a result, readers will be able to create designs tailoring proficient procedures for recurrent neural networks in their targeted applications. Explains the intricacy and diversity of recurrent networks, from simple to more complex gated recurrent neural networks; discusses the design framing of such networks and how to redesign a simple RNN to avoid unstable behavior; describes the forms of RNN training framed as adaptive non-convex optimization with dynamic constraints.
650 | | 0 | _aElectronic circuits.
650 | | 0 | _aSignal processing.
650 | | 0 | _aData mining.
650 | 1 | 4 | _aElectronic Circuits and Systems. |
650 | 2 | 4 | _aSignal, Speech and Image Processing.
650 | 2 | 4 | _aData Mining and Knowledge Discovery. |
653 | | | _aNeural networks (Computer science)
710 | 2 | | _aSpringerLink (Online service)
856 | 4 | 0 | _uhttps://doi.org/10.1007/978-3-030-89929-5 _3Springer eBooks _zOnline access link to the resource
942 | | | _2lcc _cEBK
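
The 505 contents and 520 summary above name a simple-to-gated progression (simple RNN, then LSTM, GRU, and MGU) and the Tensorflow-Keras framework. The sketch below is illustrative only, not code from the book: it builds the built-in Keras counterpart of each named layer on toy data; the layer width, sequence shape, and random data are all assumptions, and the MGU of chapter 6 has no built-in Keras layer.

```python
import numpy as np
import tensorflow as tf

# Toy binary sequence-classification data (assumed, illustrative only):
# 100 sequences, 20 time steps, 8 features each.
x = np.random.randn(100, 20, 8).astype("float32")
y = np.random.randint(0, 2, size=(100,)).astype("float32")

def make_model(recurrent_layer):
    """Wrap one recurrent layer with a dense sigmoid read-out."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(20, 8)),
        recurrent_layer,
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

# Chapters 3-5 map onto built-in Keras layers; the MGU of chapter 6
# does not and would require a custom cell wrapped in tf.keras.layers.RNN.
for layer in (tf.keras.layers.SimpleRNN(16),
              tf.keras.layers.LSTM(16),
              tf.keras.layers.GRU(16)):
    model = make_model(layer)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    # fit() differentiates through the unrolled recurrence, i.e. the
    # backpropagation through time (BPTT) named in the 520 summary.
    model.fit(x, y, epochs=1, verbose=0)
```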