Research article

Deep reinforcement learning with emergent communication for coalitional negotiation games

  • Received: 19 December 2021 Revised: 17 January 2022 Accepted: 07 February 2022 Published: 07 March 2022
  • For tasks intractable for a single agent, agents must cooperate to accomplish complex goals. A good example is coalitional games, where a group of individuals forms coalitions to produce jointly and share surpluses. In such coalitional negotiation games, a key challenge is how to negotiate strategically to reach agreements on gain allocation when the agents are independent and selfish. This work therefore employs deep reinforcement learning (DRL) to build an autonomous agent, called DALSL, that can deal with arbitrary coalitional games without human input. Furthermore, the DALSL agent is equipped with the ability to exchange information with other agents through emergent communication. We show that the agent can successfully form a team, distribute the team's benefits fairly, and effectively use the language channel to exchange specific information, thereby promoting the establishment of small coalitions and shortening the negotiation process. The experimental results show that the DALSL agent obtains a higher payoff when negotiating with handcrafted agents and other RL-based agents; moreover, it outperforms other competitors by a larger margin when the language channel is allowed.
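To make the setting concrete: a coalitional game assigns a worth to every subset of players, and "fair" gain allocation is classically captured by the Shapley value, which averages each player's marginal contribution over all join orders. The sketch below is illustrative only and is not the paper's DALSL method; the toy game and function names are our own assumptions.

```python
import math
from itertools import permutations

def shapley_values(players, v):
    """Shapley value of each player for a characteristic function v,
    where v maps a frozenset of players to that coalition's worth."""
    n = len(players)
    values = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            # Marginal contribution of p when joining in this order
            values[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    # Average over all n! join orders
    return {p: val / math.factorial(n) for p, val in values.items()}

# Hypothetical three-player surplus game: player "a" is needed for any
# positive surplus; either partner alone yields 60, both together 120.
v = {
    frozenset(): 0, frozenset("a"): 0, frozenset("b"): 0, frozenset("c"): 0,
    frozenset("ab"): 60, frozenset("ac"): 60, frozenset("bc"): 0,
    frozenset("abc"): 120,
}
phi = shapley_values("abc", v)  # a gets 60; b and c, being symmetric, get 30 each
```

The allocation is efficient (the shares sum to the grand coalition's worth of 120) and rewards the pivotal player "a" more, which is the kind of fair split a negotiating agent would aim to reach.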

    Citation: Siqi Chen, Yang Yang, Ran Su. Deep reinforcement learning with emergent communication for coalitional negotiation games[J]. Mathematical Biosciences and Engineering, 2022, 19(5): 4592-4609. doi: 10.3934/mbe.2022212

  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
