Research Article | Open Access

Can ChatGPT be used to generate scientific hypotheses?

Yang Jeong Park a,b, Daniel Kaplan c, Zhichu Ren d, Chia-Wei Hsu d, Changhao Li a,e, Haowei Xu a, Sipei Li a, Ju Li a,d (corresponding author)

a Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
b Institute of New Media and Communications, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea
c Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 7610001, Israel
d Department of Materials Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
e Global Technology Applied Research, JPMorgan Chase, 237 Park Ave, New York, NY 10017, USA

Peer review under responsibility of The Chinese Ceramic Society.

Abstract

We investigate whether large language models can perform the creative hypothesis generation that human researchers regularly do. While the error rate is high, generative AI seems to be able to effectively structure vast amounts of scientific knowledge and provide interesting and testable hypotheses. The future scientific enterprise may include synergistic efforts with a swarm of “hypothesis machines”, challenged by automated experimentation and adversarial peer reviews.

Journal of Materiomics
Pages 578-584
Cite this article:
Park YJ, Kaplan D, Ren Z, et al. Can ChatGPT be used to generate scientific hypotheses? Journal of Materiomics, 2024, 10(3): 578-584. https://doi.org/10.1016/j.jmat.2023.08.007

Received: 10 June 2023
Revised: 24 August 2023
Accepted: 30 August 2023
Published: 18 September 2023
© 2023 The Authors.

This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).