Fig. 2.8 Structure diagram of the LSTM model

The LSTM is computed as follows:

$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$  (2.13)
$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$  (2.14)
$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$  (2.15)
$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$  (2.16)
$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$  (2.17)
$h_t = o_t \odot \tanh(C_t)$  (2.18)

The three gating units that the LSTM introduces are described in detail below.

(1) Forget gate. The first step in the LSTM is to decide what information to discard from the cell state. This decision is made by a gating unit called the forget gate. The gate reads $h_{t-1}$ and $x_t$ and outputs a value between 0 and 1 for each entry of the cell state:

$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$  (2.19)

where $h_{t-1}$ denotes the output of the previous cell and $x_t$ denotes the current input.

(2) Input gate. The next step is to decide how much new information to add to the cell state. This involves two parts: first, a sigmoid layer called the input gate decides which values to update; then a tanh layer produces a vector of candidate values $\tilde{C}_t$ to be added to the state:

$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$  (2.20)
$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$  (2.21)

(3) Output gate. As shown in the figure, the output gate determines the output value based on the cell state. A sigmoid layer decides which parts of the cell state to output; the cell state is then passed through a tanh function, yielding a value in (-1, 1), which is multiplied by the output of the sigmoid gate to give the output of the hidden layer:

$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$  (2.22)
$h_t = o_t \odot \tanh(C_t)$  (2.23)

1.3 The GRU Model

Because the standard RNN is prone to vanishing and exploding gradients during training, it cannot capture long-range dependencies in users' check-in sequences. To remedy this shortcoming of the RNN, improved RNN variants have been proposed, such as the LSTM [48] and the GRU [49].
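The gate computations described above can be sketched as a single LSTM time step in NumPy. This is a minimal illustration of Eqs. (2.13)–(2.18), not the model used in this work; the parameter names (`W_f`, `b_f`, etc.) and the tiny random dimensions are assumptions for demonstration only.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM time step following Eqs. (2.13)-(2.18).

    params maps each gate g in {f, i, c, o} to a weight matrix W_g of
    shape (hidden, hidden + input) and a bias vector b_g of shape (hidden,).
    """
    z = np.concatenate([h_prev, x_t])                      # [h_{t-1}, x_t]
    f = sigmoid(params["W_f"] @ z + params["b_f"])         # forget gate (2.13)
    i = sigmoid(params["W_i"] @ z + params["b_i"])         # input gate  (2.14)
    c_tilde = np.tanh(params["W_c"] @ z + params["b_c"])   # candidate   (2.15)
    c = f * c_prev + i * c_tilde                           # cell state  (2.16)
    o = sigmoid(params["W_o"] @ z + params["b_o"])         # output gate (2.17)
    h = o * np.tanh(c)                                     # hidden out  (2.18)
    return h, c


# Tiny demo with random weights (illustrative sizes only).
rng = np.random.default_rng(0)
n_in, n_h = 3, 4
params = {}
for g in "fico":
    params[f"W_{g}"] = rng.standard_normal((n_h, n_h + n_in)) * 0.1
    params[f"b_{g}"] = np.zeros(n_h)

h, c = lstm_step(rng.standard_normal(n_in), np.zeros(n_h), np.zeros(n_h), params)
print(h.shape, c.shape)  # (4,) (4,)
```

Note that because $h_t = o_t \odot \tanh(C_t)$ with both factors bounded, every entry of the hidden output lies strictly inside (-1, 1).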