Complex Systems and Complexity Science, 2019, Vol. 16, Issue (1): 43-53    DOI: 10.13306/j.1672-3813.2019.01.005
Agent-Based Simulation of Enterprise Entrepreneurship and Innovation Based on DQN
LI Rui1, WANG Zheng1,2
1. Key Laboratory of Geographical Information Science, Ministry of Education of China, East China Normal University, Shanghai 200241, China;
2.Institute of Policy and Management Science, Chinese Academy of Sciences, Beijing 100080, China
Full text: PDF (1651 KB)
Abstract: Based on ACE (Agent-based Computational Economics), this paper uses an agent-based model to build an economic system model grounded in enterprise behavior, aiming to address the dynamics and policy questions of combining enterprise innovation with entrepreneurship. In the enterprise entrepreneurship and innovation economic system constructed in this paper, the behavior of each enterprise agent is simulated adaptively with the DQN algorithm from artificial intelligence. The simulation results show that, compared with enterprise agents without self-adaptive behavior, enterprise agents with self-adaptive behavior are better able to make correct business decisions by evaluating the environment and their own state.
Key words: DQN; self-adaptive learning; agent-based simulation; technological advance; business decisions
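The abstract describes enterprise agents that choose actions adaptively via DQN, i.e., Q-learning with a function approximator, epsilon-greedy exploration, and experience replay. The paper's actual state features, action set, and reward design are not given on this page, so the sketch below is purely illustrative: it substitutes a linear Q-approximator for the deep network and assumes a toy two-action world ("innovate" vs. "wait") in which innovating pays off.

```python
import random
from collections import deque

import numpy as np


class ReplayBuffer:
    """Experience replay, one of the core ingredients of DQN."""

    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)

    def push(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))

    def sample(self, batch_size):
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)


class LinearQAgent:
    """DQN-style agent with a linear Q-approximator standing in for the deep net."""

    def __init__(self, n_features, n_actions, lr=0.01, gamma=0.9, epsilon=0.1):
        self.W = np.zeros((n_actions, n_features))  # one weight row per action
        self.lr, self.gamma, self.epsilon = lr, gamma, epsilon
        self.n_actions = n_actions

    def q_values(self, state):
        return self.W @ state  # Q(s, a) for every action a

    def act(self, state):
        # epsilon-greedy: explore occasionally, otherwise pick the best-valued action
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return int(np.argmax(self.q_values(state)))

    def learn(self, batch):
        # one TD(0) update per replayed transition
        for s, a, r, s_next in batch:
            target = r + self.gamma * float(np.max(self.q_values(s_next)))
            td_error = target - float(self.q_values(s)[a])
            self.W[a] += self.lr * td_error * s


# Toy training loop: action 0 ("innovate") always rewards 1, action 1 ("wait") rewards 0.
random.seed(0)
buf = ReplayBuffer()
agent = LinearQAgent(n_features=1, n_actions=2)
state = np.array([1.0])  # single constant feature, for illustration only
for _ in range(300):
    a = agent.act(state)
    reward = 1.0 if a == 0 else 0.0
    buf.push(state, a, reward, state)
    if len(buf) >= 16:
        agent.learn(buf.sample(16))
```

After training, the agent's learned Q-value for "innovate" exceeds that for "wait", mirroring the abstract's claim that adaptive agents learn correct decisions from evaluating their state; a real DQN would replace `LinearQAgent.W` with a neural network and a periodically synced target network.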
Received: 2018-10-23      Published: 2019-07-04
CLC number: TB3
Funding: National Natural Science Foundation of China (D010701)
Corresponding author: WANG Zheng (1954- ), male, born in Luliang, Yunnan; research professor and doctoral supervisor. Research interests: economic computation, geocomputation, regional science and management.
About the author: LI Rui (1995- ), male, born in Dongyang, Zhejiang; master's student. Research interest: geocomputation.
Cite this article:
LI Rui, WANG Zheng. Agent-Based Simulation of Enterprise Entrepreneurship and Innovation Based on DQN. Complex Systems and Complexity Science, 2019, 16(1): 43-53.
Link to this article:
http://fzkx.qdu.edu.cn/CN/10.13306/j.1672-3813.2019.01.005      or      http://fzkx.qdu.edu.cn/CN/Y2019/V16/I1/43