Open Access
Stock Market Prediction and Investment Using Deep Reinforcement Learning - A Continuous Training Pipeline
Author(s) -
Pallavur V. Sivakumar,
Debjyoti Guha,
Hitesh K. Agarwal,
Kothiya Meetkumar Harshadbhai
Publication year - 2020
Publication title -
International Journal of Engineering and Advanced Technology
Language(s) - English
Resource type - Journals
ISSN - 2249-8958
DOI - 10.35940/ijeat.b2034.1210220
Subject(s) - reinforcement learning , pipeline (software) , computer science , portfolio , stock market , artificial intelligence , investment performance , trading strategy , expected return , machine learning , return on investment , operations research , econometrics , economics , financial economics , microeconomics , engineering , programming language
The fluctuating nature of the stock market makes it hard to predict future market trends and to decide where to invest; hence, there is a need for a cross-platform application backed by a modern architecture. With recent advances in Deep Reinforcement Learning, many practical sequential decision problems can be modeled and solved with human-level accuracy. In this paper, an agent based on the Deep Deterministic Policy Gradient (DDPG) algorithm is proposed to imitate professional trading strategies: a state-of-the-art framework that can predict market movements and invest customers' money for high returns. In addition, since the system follows an interday trading strategy, the proposed architecture is designed as a continuous training pipeline, so that the saved model stays up to date with recent market trends and yields higher prediction accuracy. The framework outperforms the baseline reinforcement learning algorithms and maximizes portfolio return. The experimental results show how natural language processing and statistical prediction can help select trending stocks from news headlines and historical data, so that the model invests money only in markets with higher expected returns. To evaluate the performance of the proposed method, our portfolio results were compared with those of various other reinforcement learning algorithms under the same configuration.
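
The core of the method the abstract describes is the DDPG update: a deterministic actor proposes a trading action, a critic estimates its value, and target networks stabilize bootstrapping. Below is a minimal PyTorch sketch of one such update step; the state/action dimensions, network sizes, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 10, 1  # assumed: price-feature window, single-asset weight
GAMMA, TAU = 0.99, 0.005       # assumed discount factor and Polyak rate

def mlp(in_dim, out_dim, out_act=None):
    layers = [nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

actor = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())    # deterministic policy mu(s) in [-1, 1]
critic = mlp(STATE_DIM + ACTION_DIM, 1)          # action-value function Q(s, a)
actor_t = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())  # target networks, initialized as copies
critic_t = mlp(STATE_DIM + ACTION_DIM, 1)
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())

actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s2, done):
    """One DDPG step on a batch of transitions (all arguments are batched tensors)."""
    with torch.no_grad():
        a2 = actor_t(s2)
        q_target = r + GAMMA * (1 - done) * critic_t(torch.cat([s2, a2], -1))
    # Critic: regress Q(s, a) toward the bootstrapped target.
    q = critic(torch.cat([s, a], -1))
    critic_loss = nn.functional.mse_loss(q, q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's value of the policy's own actions.
    actor_loss = -critic(torch.cat([s, actor(s)], -1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Polyak-average the target networks toward the online networks.
    with torch.no_grad():
        for p, pt in zip(actor.parameters(), actor_t.parameters()):
            pt.mul_(1 - TAU).add_(TAU * p)
        for p, pt in zip(critic.parameters(), critic_t.parameters()):
            pt.mul_(1 - TAU).add_(TAU * p)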
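
The continuous training pipeline can then be pictured as a loop that periodically retrains the saved agent on fresh market data and persists the refreshed checkpoint. The sketch below reuses ddpg_update from the previous sketch; fetch_recent_ohlcv and replay_from_prices are hypothetical placeholders for the data layer, and the checkpoint path and retraining interval are assumptions.

import time
import torch

CHECKPOINT = "ddpg_agent.pt"  # assumed checkpoint path
RETRAIN_EVERY = 24 * 60 * 60  # assumed: refresh once per trading day (seconds)

def continuous_training_loop():
    torch.save(actor.state_dict(), CHECKPOINT)          # write an initial checkpoint
    while True:
        actor.load_state_dict(torch.load(CHECKPOINT))   # resume from the latest model
        prices = fetch_recent_ohlcv()                   # hypothetical: pull recent market data
        for batch in replay_from_prices(prices):        # hypothetical: build (s, a, r, s2, done) batches
            ddpg_update(*batch)                         # fine-tune on the new transitions
        torch.save(actor.state_dict(), CHECKPOINT)      # persist the updated weights
        time.sleep(RETRAIN_EVERY)                       # wait until the next retraining window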