| Graduate Student: | 林羿霈 Lin, Yi-Pei |
|---|---|
| Thesis Title: | Can Deep Reinforcement Learning outperform KD Technical Indicators and Buy-and-hold trading strategies? Bitcoin, Gold and Stock Index as Examples |
| Advisor: | 劉裕宏 Liu, Yu-Hong |
| Degree: | Master |
| Department: | Department of Accountancy, College of Management |
| Year of Publication: | 2021 |
| Academic Year: | 109 (2020-21) |
| Language: | English |
| Pages: | 39 |
| Keywords: | Deep Reinforcement Learning, Q-learning, Stock Price Prediction, KD Technical Indicator |
High accuracy in predicting price changes is usually difficult to achieve, and investors are often swayed by emotions, biases, and market dynamics. Machine learning can produce concise, accurate predictions in a short time, helping investors analyze price movements and trade at appropriate points. This thesis applies deep reinforcement learning to a financial environment. Compared with supervised learning, deep reinforcement learning can adapt to complex financial environments through continuous interaction with the environment, and can quickly adjust to new market conditions to keep a strategy effective. I use deep Q-learning to pursue the maximized returns that investors have always cared about. Q-learning is a model-free reinforcement learning method: rather than building a model in advance, it interacts directly with the environment and continually updates its own parameters as it learns, executing three trading actions: buy, sell, and hold. The total reward earned by automated trading over the data period is then computed. Comparing the total reward obtained on the tested historical prices against the total rewards of the buy-and-hold strategy and the KD technical-indicator strategy, we find that deep Q-learning outperforms the other two strategies across the various price series, and performs best when applied to Bitcoin prices, which are decentralized and subject to no trading-day restrictions.
On-campus access: available from 2026-06-30.