Discount factor
The discount factor γ determines the importance of future rewards. A factor of 0 makes the agent "myopic" (or short-sighted) by considering only current rewards, while a factor approaching 1 makes it strive for a high long-term reward. If the discount factor meets or exceeds 1, the action values may diverge. For γ = 1, if there is no terminal state, or if the agent never reaches one, all environment histories become infinitely long, and utilities with additive, undiscounted rewards generally become infinite.[2]
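A minimal sketch in Python (not part of the original article) illustrates the effect of γ; the constant reward stream and the 100-step horizon are assumptions chosen for clarity:

    def discounted_return(rewards, gamma):
        # Sum of gamma**t * r_t over a finite reward sequence.
        return sum((gamma ** t) * r for t, r in enumerate(rewards))

    rewards = [1.0] * 100  # hypothetical: a reward of 1 at every step

    print(discounted_return(rewards, 0.0))  # 1.0   -- "myopic": only the immediate reward counts
    print(discounted_return(rewards, 0.9))  # ~10.0 -- bounded by 1/(1 - 0.9) regardless of horizon
    print(discounted_return(rewards, 1.0))  # 100.0 -- grows without bound as the horizon grows

For γ < 1 the geometric series keeps the return finite even over an infinite horizon, which is why divergence is a concern only when γ meets or exceeds 1.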

Initial conditions (Q₀)
Since Q-learning is an iterative algorithm, it implicitly assumes an initial condition before the first update occurs. High initial values, also known as "optimistic initial conditions",[3] can encourage exploration: no matter which action is selected, the update rule drives its estimated value below those of the untried alternatives, increasing their probability of being chosen. Recently, it was suggested that the first reward r could be used to reset the initial conditions[citation needed]. According to this idea, the first time an action is taken its reward is used to set the value of Q. This allows immediate learning in the case of fixed deterministic rewards. Surprisingly, this resetting-of-initial-conditions (RIC) approach appears to be consistent with human behaviour in repeated binary choice experiments.[4]
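A minimal sketch of optimistic initial conditions, assuming a hypothetical two-armed Bernoulli bandit and a constant step size (neither is specified in the article):

    import random

    random.seed(0)
    true_means = [0.3, 0.7]  # hypothetical expected reward of each action
    Q = [5.0, 5.0]           # optimistic initial values, well above any attainable reward
    alpha = 0.1              # assumed constant step size

    for step in range(500):
        a = Q.index(max(Q))  # purely greedy selection, with no explicit exploration
        r = 1.0 if random.random() < true_means[a] else 0.0
        # Each update pulls the chosen action's estimate down toward its true mean,
        # so the greedy rule soon switches to the not-yet-disappointed alternative:
        # optimism alone causes every action to be tried.
        Q[a] += alpha * (r - Q[a])

    print(Q.index(max(Q)))  # typically 1: the higher-mean arm ends up preferred

Under the RIC idea above, the first pull of an action would instead overwrite its optimistic value with the observed reward (Q[a] = r), which yields immediately correct estimates when rewards are fixed and deterministic.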