Graph Neural Network-Based Macroscale AI Model for Perovskite Solar Cell Power Conversion Efficiency Prediction
Abstract
Perovskite solar cells have emerged as an alternative to traditional solar cells that addresses the problem of low cost-effectiveness. Because perovskites can be fabricated in an enormous variety of compositions, experimentally testing every type is impractical. A model that predicts the performance of perovskite solar cells is therefore imperative for the further development of these materials. In this study, we create a graph neural network-based artificial intelligence model that predicts the power conversion efficiency of a perovskite solar cell from the cell's components. The data were retrieved from the Perovskite Database Project, gathering all solar cells with the perovskite structure ABX3. Different graph convolutional operations and aggregation algorithms were tested for the model. Overall, a model that combined a recurrent neural network with a long short-term memory implementation and a graph attention operation achieved a low MAE of 1.147 and a low RMSE of 1.6971 when predicting on the unnormalized test set. This study demonstrates the capability of AI to create a macroscale perovskite solar cell prediction model and aims to serve as a baseline for more complex models.
Introduction
Research on renewable energy, of which photovoltaic cells are a major component, is at a peak of interest amid increasing pollution [1]. Solar cells can be divided into three generations: the first and second generations are based on silicon wafers and thin films, respectively, while third-generation solar cells use organic, inorganic, or hybrid structures [2]. First- and second-generation solar cells are limited by high cost, elaborate fabrication processes, and environmental unfriendliness. This led to the rapid development of third-generation cells, including polycrystalline-silicon solar cells, single-crystalline silicon solar cells, CdTe-based solar cells, CIGS solar cells, organic photovoltaics, quantum dot sensitized solar cells, and perovskite solar cells [3].
Perovskite solar cells (PSCs) have been studied extensively throughout the previous decade owing to perovskites' excellent optoelectronic properties, such as a suitable photovoltaic bandgap range, high tunability, low exciton binding energy, and ease of fabrication [4]–[7]. This extensive research has raised the power conversion efficiency (PCE) of PSCs to as high as 31% [8]. PSC development, however, has been limited by trial-and-error methods that consume significant amounts of time, material, and human resources [9]. One common way to reduce this trial and error is the use of density functional theory calculations [10]. However, such methods still take many hours to compute predictions for a single material and fall behind current needs [11].
Machine learning (ML) has now emerged as a suitable method for predicting perovskite properties such as bandgap, chemical stability, structural properties, and formability [12]–[16]. In addition, the PCE of third-generation solar cells (organic solar cells, dye-sensitized solar cells, and PSCs) has been predicted using ML [9], [17]–[22].
Graph neural networks (GNNs) are ML algorithms that make predictions on graph-structured data, where a graph is simply a mathematical representation of nodes connected by edges [23]. GNNs are often applied in settings with minimal feature engineering, yet they display high representational capability [24]. This representational power on graph-structured data has enabled a wide range of applications, including in materials science. Thus far, GNNs have been utilized for novel drug discovery, determining fuel ignition quality, molecular property prediction, and crystal structure property prediction [25]–[28].
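As a concrete illustration (not taken from the cited works), graph-structured data of this kind can be built with PyTorch Geometric, the library used later in this study. Here a five-node undirected chain stands in for a layered device stack; the node features and edge layout are placeholders:

```python
import torch
from torch_geometric.data import Data

# Edges of an undirected five-node chain, stored as directed pairs
# in both directions, as PyTorch Geometric expects.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 4],
                           [1, 0, 2, 1, 3, 2, 4, 3]], dtype=torch.long)
x = torch.randn(5, 16)  # placeholder feature vector for each node
graph = Data(x=x, edge_index=edge_index)
print(graph)  # Data(x=[5, 16], edge_index=[2, 8])
```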
Owing to these performances in materials science, GNNs have been utilized for perovskite property prediction, such as bandgap, synthesizability, and stability prediction [29]–[31]. Although perovskite properties have often been predicted using GNNs, there has not, to the authors' best knowledge, been an approach that generalizes GNNs to PSC property prediction itself, specifically PCE prediction. In addition, although these studies utilize GNNs, they still conduct extensive feature engineering to predict several perovskite properties. This study aims to create a baseline model that generalizes GNNs from perovskite property prediction to PSC PCE prediction with minimal feature engineering, taking advantage of the representational power of GNNs.
Methodology
Data Acquisition and Processing
The data were acquired from the Perovskite Database Project, a database that collects PSC data following the FAIR principles [32]. The database compiles data from more than 10,000 solar cells, including their device architecture, fabrication methods, and performance parameters such as PCE and open-circuit voltage. Of the PSC data in the database, this study considers perovskites with the ABX3 structure and an n-i-p device architecture to demonstrate the feasibility of the approach. Each data point included the cathode, electron transport layer (ETL), perovskite structure, hole transport layer (HTL), and anode as plain-text labels, without any optoelectronic properties added. Only materials that appeared in 10 or more different solar cells (for the ETL, HTL, anode, and cathode) were selected for training.
Overall, the selected materials were: Au, carbon, Ag, and Al for the anode; FTO, ITO, and graphene for the cathode; Spiro-MeOTAD, PEDOT:PSS, PTAA, NiO-c, CuSCN, CuPC, and P3HT for the HTL; and TiO2-c, SnO2-c, TiO2-np, SnO2-np, PCBM-60, C60, ZnO-c, ZnO-np, and PCBM-70 for the ETL. The perovskite A site had three options (MA, FA, and Cs), the B site had two options (Pb and Sn), and the X site had two options (Br and I). There were 343 data points, split 8:1:1 into training, validation, and test sets.
Each unique material in the anode, cathode, ETL, and HTL was assigned a numerical index. For perovskite structures, the composition at each site was encoded into an array whose values are the ratios of each element; a value was set to −∞ if the ratio was zero. Each array was then normalized between zero and one using the softmax operation, under which the −∞ entries map to exactly zero.
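A minimal sketch of this site encoding, assuming PyTorch and the site vocabularies listed above (A = MA/FA/Cs, B = Pb/Sn, X = Br/I):

```python
import torch

def encode_site(ratios):
    """Encode one lattice site's element ratios.

    Zero ratios are replaced with -inf so that the softmax assigns
    them exactly zero weight, as described in the text."""
    x = torch.tensor(ratios, dtype=torch.float)
    x[x == 0] = float("-inf")
    return torch.softmax(x, dim=0)

# Example: an A site of composition MA0.9 Cs0.1 (no FA).
print(encode_site([0.9, 0.0, 0.1]))  # ~tensor([0.69, 0.00, 0.31])
```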
Model Architecture
The ML model utilizes a GNN combined with a sequential data operation. The value of each PSC component was encoded using an embedding lookup table with a dimensionality of 512, which is the model dimension d for this study.
When creating embeddings for the perovskite composition, each element type was initialized with a distinct vector, and the ratio of each element was used to scale and component-wise add the vectors corresponding to each site. The embeddings were then fed into a GNN built from two graph convolutional layers of the same operator type, each feeding directly into a batch normalization layer and a ReLU activation function. One convolutional layer expands the dimensionality to 2d and the other reduces it back to d. The operations tested were: graph convolution (GCNConv), graph neural network convolution (GraphConv), residual gated graph convolution (resggc), simple graph convolution (SGConv), graph attention networks (GATv2), and graph transformers (GraphTrans) [33]–[38].
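A minimal sketch of this two-convolution block, shown here with GATv2Conv from PyTorch Geometric (any of the operations above could be substituted); the class structure is an assumption, not the authors' code:

```python
import torch.nn as nn
from torch_geometric.nn import GATv2Conv

class GNNBlock(nn.Module):
    def __init__(self, dim: int = 512):  # dim is the model dimension d
        super().__init__()
        # First convolution expands d -> 2d; the second reduces 2d -> d.
        self.conv1 = GATv2Conv(dim, 2 * dim)
        self.conv2 = GATv2Conv(2 * dim, dim)
        self.bn1 = nn.BatchNorm1d(2 * dim)
        self.bn2 = nn.BatchNorm1d(dim)
        self.act = nn.ReLU()

    def forward(self, x, edge_index):
        # Each convolution feeds a batch norm and a ReLU, per the text.
        x = self.act(self.bn1(self.conv1(x, edge_index)))
        x = self.act(self.bn2(self.conv2(x, edge_index)))
        return x
```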
After the two convolutions, a graph-level representation was created using global mean pooling. The node embeddings and the graph representation were then combined and fed into a sequential data operation; this study examined an RNN with an LSTM implementation, a vanilla RNN, and a transformer decoder [39]–[41]. The LSTM and RNN had five cells each, and the decoder had six stacked layers.
The output of the sequential data operation is finally fed into a feed-forward neural network, a four-layer perceptron that uses the GELU activation function and dropout with a probability of 0.2 after each layer to prevent overfitting. The output of the feed-forward network then passes through a final linear layer that reduces it to a single scalar. Fig. 1A describes the general architecture of the model, and Fig. 1B describes the graph neural network architecture.
Fig. 1. Underlying architectures: A) Overview of entire model, B) Graph neural network.
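For concreteness, a hedged sketch of this readout path (pooling, LSTM, feed-forward head). How the graph vector is combined with the node sequence, and the reading of "five cells" as five stacked LSTM layers, are assumptions:

```python
import torch
import torch.nn as nn
from torch_geometric.nn import global_mean_pool

class ReadoutHead(nn.Module):
    """Pooling -> LSTM -> four-layer GELU MLP -> scalar PCE."""
    def __init__(self, dim: int = 512):
        super().__init__()
        # "Five cells" is read here as five stacked LSTM layers (assumption).
        self.lstm = nn.LSTM(dim, dim, num_layers=5, batch_first=True)
        layers = []
        for _ in range(4):
            layers += [nn.Linear(dim, dim), nn.GELU(), nn.Dropout(p=0.2)]
        self.mlp = nn.Sequential(*layers)
        self.out = nn.Linear(dim, 1)  # final linear layer: single scalar

    def forward(self, x, batch):
        # x: [num_nodes, dim] node embeddings of one device graph;
        # batch: zeros tensor mapping every node to graph 0.
        g = global_mean_pool(x, batch)               # [1, dim] graph vector
        seq = torch.cat([x, g], dim=0).unsqueeze(0)  # nodes + graph vector as one sequence
        h, _ = self.lstm(seq)                        # [1, seq_len, dim]
        return self.out(self.mlp(h[:, -1]))          # prediction from the final step
```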
The model was created using PyTorch and PyTorch Geometric and was trained for 200 epochs with a ReduceLROnPlateau callback with patience 4 [42], [43]. The AdamW optimizer was used with a learning rate of 2.5e-4. The loss function was Smooth L1 loss.
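The stated training configuration maps directly onto standard PyTorch calls; `model`, `train_loader`, and `validation_loss` below are placeholders, with only the hyperparameters taken from the text:

```python
import torch

# model, train_loader, and validation_loss are placeholders for the
# paper's objects; only the stated hyperparameters are from the text.
optimizer = torch.optim.AdamW(model.parameters(), lr=2.5e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=4)
criterion = torch.nn.SmoothL1Loss()

for epoch in range(200):
    model.train()
    for data in train_loader:
        optimizer.zero_grad()
        pred = model(data.x, data.edge_index, data.batch)
        loss = criterion(pred.view(-1), data.y.view(-1))
        loss.backward()
        optimizer.step()
    # ReduceLROnPlateau lowers the learning rate after 4 stagnant epochs.
    scheduler.step(validation_loss(model))
```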
Metrics
To evaluate the model on the test set, mean absolute error (MAE) and root mean squared error (RMSE) were used as metrics; both are widely used to detect anomalies in predictions [44]. The output predictions were unnormalized so that the results reflect the error on the original PCE scale. Because RMSE penalizes large errors more heavily than MAE, the gap between the two is used to gauge how prone the model is to predicting outliers.
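The two metrics, written out as computed here (on unnormalized values):

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error on unnormalized PCE values (percentage points).
    return float(np.mean(np.abs(np.asarray(y_pred) - np.asarray(y_true))))

def rmse(y_true, y_pred):
    # Root mean squared error; squaring weights large misses (outliers) more.
    return float(np.sqrt(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2)))
```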
Results
Transformer Decoder
Fig. 2A and B show the unnormalized MAE and RMSE values for each algorithm. The model using the GATv2 operation shows the lowest MAE of 1.6822 and the lowest RMSE of 1.9994; the small difference between MAE and RMSE suggests few outliers in the predictions. The model using the resggc operation had the highest MAE of 3.7947, while the model using the GraphTrans operation had the highest RMSE of 5.762. The GraphTrans-based model had the second-highest MAE of 3.6082, and the resggc-based model had the second-highest RMSE of 4.2626.
Fig. 2. Errors of models with each graph convolution algorithm using a transformer decoder as sequential operator: A) MAE, B) RMSE.
Fig. 3A shows the predictions of the model using the GATv2 operation versus the true PCE values. The model is relatively accurate, with a few outliers, especially when the true values lie toward the lower end of the data. Fig. 3B and C show the predictions of the models using the GraphTrans and resggc operations, respectively. The output of the GraphTrans-based model is much more dispersed than that of the resggc-based model, which helps explain the MAE and RMSE values: the GraphTrans-based model has more outliers than the resggc-based model, which is reflected in its higher RMSE.
Fig. 3. Predictions of models using a transformer decoder as sequential operator with different graph convolution algorithms: A) GATv2, B) GraphTrans, C) resggc.
LSTM Based Models
Fig. 4A and B show the unnormalized MAE and RMSE values for each operation. The GraphConv operation shows the lowest MAE of 1.127 and the lowest RMSE of 1.6971; the small difference between MAE and RMSE indicates few outliers in the predictions. The SGConv operation showed the highest MAE of 1.6159, but the highest RMSE, 2.3789, came from the GraphTrans operation. The SGConv operation had the second-highest RMSE of 2.244, and the GraphTrans operation had the second-highest MAE of 1.3921. The large difference between MAE and RMSE for the GraphTrans-based model signals especially large outliers.
Fig. 4. Errors of models with each graph convolution algorithm using LSTM as sequential operator: A) MAE, B) RMSE.
A relatively large deviation also appears for another operation: the GATv2 operation, which had the second-lowest MAE of 1.1471, showed the third-highest RMSE of 1.9911, indicating outliers in the GATv2-based model's predictions compared with the other operations.
Fig. 5A shows the predictions of the model using the GraphConv operation versus the true PCE values. The model is relatively accurate, with only a few outliers, mostly when the true values lie toward the lower end of the data. It also improves on the best-performing model that uses the transformer decoder. Figs. 5B–5D show the predictions of the models using the SGConv, GraphTrans, and GATv2 operations, respectively. Their predictions undershoot, with the data points deviating further to the left than the results from the model using the GraphConv operation.
Fig. 5. Predictions of models using an LSTM as sequential operator with different graph convolution algorithms: A) GraphConv, B) SGConv, C) GraphTrans, and D) GATv2.
RNN Based Models
Fig. 6A and B show the unnormalized MAE and RMSE values for each operation. The model using the GraphTrans operation shows the lowest MAE of 1.477 and the lowest RMSE of 1.7905; the similar magnitudes of MAE and RMSE indicate few outliers in the predictions. The model using the GCNConv operation showed the highest MAE of 2.7949 and the highest RMSE of 3.3725.
Fig. 6. Results of models with plain RNN as a sequential data operator: A) MAE, B) RMSE, C) predictions using GraphConv, D) predictions using the GCNConv.
Fig. 6C shows the predictions of the model using the GraphConv operation versus the true PCE values. The model is relatively accurate, with some outliers, especially when the true values are relatively low. Fig. 6D shows the predictions of the model using the GCNConv operation, which underpredicts the PCE values, as seen in the overall shift to the left. The RNN-based model produces more accurate predictions for data points with low true values than the transformer decoder; overall, however, it performs worse than the model using the LSTM, as reflected in the MAE and RMSE values.
Discussions
This study reports an ML model, based on a graph neural network whose outputs are aggregated by a long short-term memory network, that produces highly accurate predictions. Perovskite solar cells hold great promise in the field of solar energy, yet, to the authors' best knowledge, only a few models have aimed to predict their power conversion efficiency. Predicting perovskite materials can be immensely helpful in developing solar cells that deliver high power conversion efficiency and remain stable at relatively low cost, aiding the move toward renewable energy and reducing human impact on the environment.
A Bayesian learning approach using K-nearest neighbors, a light gradient boosting machine, random forest regression, and extremely randomized trees was examined for predicting the PCE of perovskite solar cells [45]. The team obtained MAEs of 2.41, 2.33, 5.13, and 3.38 and MSEs of 3.43, 3.37, 29.89, and 3.38 for the light gradient boosting machine, extremely randomized trees, K-nearest neighbors, and random forest regression, respectively.
This study demonstrates a much lower MAE of 1.147 for the best-performing model, which used a GATv2 operation with an LSTM sequential data operator; the corresponding MSE of 2.8801 is also lower than the MSE of the model proposed by Li et al. [45]. Li et al. conducted extensive feature extraction for each layer prior to applying ML, whereas this study involves minimal feature engineering. Their method allows for the prediction of perovskite-derived structures such as the double perovskite, showing higher applicability; however, the lower error achieved here demonstrates the potential of the proposed model.
The XGBoost algorithm has often proven effective for predicting PCE. One team built 16 ML models using 1072 experimental data points from the literature to predict the photovoltaic parameters of PSCs from 17 fabrication process parameters; the experimental and predicted PCE obtained from the XGBoost algorithm had an RMSE of 1.28 and a relatively high correlation (r = 0.768) [46]. Another team tested linear, logic-tree-based, gradient-based, discriminative, and attention-based neural network regressors to predict the PCE of PSCs; there, the XGBoost algorithm obtained the lowest MAE of 1.52 [47]. The RMSE reported by Lu et al. [46] is lower than that of this study, which is 1.6971.
One key difference is the features considered. The study of Lu et al. [46] considered the ETL electron mobility; the presence or absence of post-treatment of the electron transport layer; the presence or absence of a buffer layer on the perovskite film; the DMSO:DMF ratio; the XLogP3, boiling point, and viscosity of the antisolvent; the annealing temperature of the perovskite film; different additives; and the Cs, FA, MA, and Br ratios in the perovskite. In contrast, this study varied the material of every layer but did not conduct such extensive feature engineering. This methodological difference highlights the potential of this study's approach to handle a broader range of PSCs, even if the initial model accuracy is slightly reduced. As such, the higher RMSE does not diminish the utility of the model but rather underscores the need for further optimization and feature engineering to fully exploit the expansive data.
The best-performing model in this study provides more reliable predictions than that of Mohanty and Palai [47]. That team used feature engineering to select 26 features and added synthetic data to augment the dataset. In addition, they used data covering all types of perovskite-based structures, which indicates higher generalizability and most likely contributed to the difference in loss between the two models. Notably, when not conducting data augmentation, the study of Mohanty and Palai [47] showed considerably higher errors, with the best-performing attention-based neural network showing an MAE of 2.28 and an RMSE of 3.16. Despite some limitations, this difference in the scope of data offers one explanation for why the errors are much higher in the work of Mohanty and Palai [47].
Another group created several ML models trained on previously reported experimental data to predict perovskite bandgaps and, subsequently, PSC PCE. Among the models, the CatBoostRegressor performed best for both bandgap and efficiency approximation [13]. The model achieved an RMSE and MAE of 0.02625 and 0.0198, respectively.
Although the RMSE and MAE of the model proposed by Khan et al. may seem lower than those of this study, Khan et al. report normalized MAE and RMSE, that is, the error when efficiencies are measured as decimals rather than percentages [13]. Once the errors from this study are converted by dividing by 100, the model proposed here demonstrates much lower error than that of Khan et al., with an RMSE and MAE of 0.01697 and 0.01147, respectively. Khan et al. used a dataset with a fixed ETL and HTL, different deposition methods for the ETL and HTL, and the same perovskite structure limitations as this study. Their data include the deposition methods, which could contribute to the difference between the errors, whereas this study includes a more comprehensive set of PSC components [13].
A team of researchers utilized composition-based feature vector libraries such as Oliynyk, Magpie, and mat2vec to extract properties and performed regression using random forest, neural network, and gradient-boosted decision tree models [48]. The team obtained RMSEs of 4.2910, 4.2994, and 4.2932 and MAEs of 3.3897, 3.3955, and 3.3925 for random forest models using Oliynyk, Magpie, and mat2vec, respectively; the other algorithms were less effective than random forest. The best-performing model of this study showed an MAE and RMSE of 1.147 and 1.6971, respectively, which is lower than that of Fukasawa et al. [48]. The data used in the study of Fukasawa et al. [48] were far more general, which likely contributed to this difference; nevertheless, the drastic decrease in loss further demonstrates the potential of the proposed model.
Conclusion
This study showed that a graph neural network-based architecture that uses a GATv2 operation with an LSTM sequential data operator improves the overall prediction quality of PSC PCEs. However, there are several limitations. The proposed model has limited generalizability: because the task was restricted to PSCs with a strict ABX3 perovskite structure, well-known anode, cathode, ETL, and HTL materials, no additives, and an n-i-p architecture, the range of tasks the proposed model can perform is limited. In addition, the proposed model cannot predict PSCs with double perovskite or perovskite-inspired structures and cannot incorporate additives.
To counter the limited generalizability across perovskites, future work should include training data with perovskite-inspired structures. To counter the limited generalizability across anode, cathode, ETL, and HTL materials, future work should conduct feature engineering before applying this model. A separate layer that accounts for additives should also be added to the architecture. In addition, explainable AI tools that reveal how the model arrives at its predictions could aid solar cell research [49].
Despite these limitations, the model proposed in this study shows great promise, given its low MAE values, and is intended to serve as a baseline model; with several improvements, it could reach its full potential. This study demonstrated that this architecture succeeds where the other architectures examined fall short, providing novel insight into a potential algorithm for modeling PSCs.
References
1. Tabor DP, Roch LM, Saikin SK, Kreisbeck C, Sheberla D, Montoya JH, et al. Accelerating the discovery of materials for clean energy in the era of smart automation. Nat Rev Mater. 2018;3:5–20. doi: 10.1038/s41578-018-0005-z.
2. Yan J, Saunders BR. Third-generation solar cells: a review and comparison of polymer:fullerene, hybrid polymer and perovskite solar cells. RSC Adv. 2014;4:43286–314. doi: 10.1039/c4ra07064j.
3. Shah N, Shah AA, Leung PK, Khan S, Sun K, Zhu X, et al. A review of third generation solar cells. Processes. 2023;11:1852. doi: 10.3390/pr11061852.
4. De Wolf S, Holovsky J, Moon SJ, Löper P, Niesen B, Ledinsky M, et al. Organometallic halide perovskites: sharp optical absorption edge and its relation to photovoltaic performance. J Phys Chem Lett. 2013;5:1035–9. doi: 10.1021/jz500279b.
5. Adjokatse S, Fang HH, Loi MA. Broadly tunable metal halide perovskites for solid-state light-emission applications. Mater Today. 2017;20:413–24. doi: 10.1016/j.mattod.2017.03.021.
6. Lin Q, Armin A, Nagiri RCR, Burn PL, Meredith P. Electro-optics of perovskite solar cells. Nat Photonics. 2015;9:106–12. doi: 10.1038/nphoton.2014.284.
7. Jung HS, Park N. Perovskite solar cells: from materials to devices. Small. 2014;11:10–25. doi: 10.1002/smll.201402767.
8. Teixeira C, Spinelli P, Castriotta LA, Müller D, Öz S, Andrade L, et al. Charge extraction in flexible perovskite solar cell architectures for indoor applications – with up to 31% efficiency. Adv Funct Mater. 2022;32(40):2206761. doi: 10.1002/adfm.202206761.
9. Hu J, Chen Z, Chen Y, Liu H, Li W, Wang Y, et al. Interpretable machine learning predictions for efficient perovskite solar cell development. Sol Energy Mater Sol Cells. 2024;271:112826. doi: 10.1016/j.solmat.2024.112826.
10. Lejaeghere K, Bihlmayer G, Björkman T, Blaha P, Blügel S, Blum V, et al. Reproducibility in density functional theory calculations of solids. Science. 2016;351:aad3000. doi: 10.1126/science.aad3000.
11. Lopez-Varo P, Jiménez-Tejada JA, García-Rosell M, Ravishankar S, Garcia-Belmonte G, Bisquert J, et al. Device physics of hybrid perovskite solar cells: theory and experiment. Adv Energy Mater. 2018;8(14):1702772. doi: 10.1002/aenm.201702772.
12. Li Y, Lu Y, Huo X, Wei D, Meng J, Dong J, et al. Bandgap tuning strategy by cations and halide ions of lead halide perovskites learned from machine learning. RSC Adv. 2021;11:15688–94. doi: 10.1039/d1ra03117a.
13. Khan A, Kandel J, Tayara H, Chong KT. Predicting the bandgap and efficiency of perovskite solar cells using machine learning methods. Mol Inform. 2024;43(2):e202300217. doi: 10.1002/minf.202300217.
14. Schmidt J, Shi J, Borlido P, Chen L, Botti S, Marques MAL. Predicting the thermodynamic stability of solids combining density functional theory and machine learning. Chem Mater. 2017;29:5090–103. doi: 10.1021/acs.chemmater.7b00156.
15. Saidi WA, Shadid W, Castelli IE. Machine-learning structural and electronic properties of metal halide perovskites using a hierarchical convolutional neural network. npj Comput Mater. 2020;6:36. doi: 10.1038/s41524-020-0307-8.
16. Xu Q, Li Z, Liu M, Yin WJ. Rationalizing perovskite data for machine learning and materials design. J Phys Chem Lett. 2018;9:6948–54. doi: 10.1021/acs.jpclett.8b03232.
17. Sahu H, Rao W, Troisi A, Ma H. Toward predicting efficiency of organic solar cells via machine learning and improved descriptors. Adv Energy Mater. 2018;8(24):1801032. doi: 10.1002/aenm.201801032.
18. Eibeck A, Nurkowski D, Menon A, Bai J, Wu J, Zhou L, et al. Predicting power conversion efficiency of organic photovoltaics: models and data analysis. ACS Omega. 2021;6:23764–75. doi: 10.1021/acsomega.1c02156.
19. Tomar N, Rani G, Dhaka VS, Surolia PK. Role of artificial neural networks in predicting design and efficiency of dye sensitized solar cells. Int J Energy Res. 2022;46:11556–73. doi: 10.1002/er.7959.
20. Hui Z, Wang M, Yin X, Wang Y, Yue Y. Machine learning for perovskite solar cell design. Comput Mater Sci. 2023;226:112215. doi: 10.1016/j.commatsci.2023.112215.
21. Gok EC, Yildirim MO, Haris MPU, Eren E, Pegu M, Hemasiri NH, et al. Predicting perovskite bandgap and solar cell performance with machine learning. Sol RRL. 2021;6(2):2100927. doi: 10.1002/solr.202100927.
22. Hu Y, Hu X, Zhang L, Zheng T, You J, Jia B, et al. Machine-learning modeling for ultra-stable high-efficiency perovskite solar cells. Adv Energy Mater. 2022;12(41):2201463. doi: 10.1002/aenm.202201463.
23. Khemani B, Patil S, Kotecha K, Tanwar S. A review of graph neural networks: concepts, architectures, techniques, challenges, datasets, applications, and future directions. J Big Data. 2024;11:18. doi: 10.1186/s40537-023-00876-4.
24. Cong G, Fung V. Improving materials property predictions for graph neural networks with minimal feature engineering. Mach Learn Sci Technol. 2023;4:035030. doi: 10.1088/2632-2153/acefab.
25. Cheung M, Moura JMF. Graph neural networks for COVID-19 drug discovery. 2020 IEEE International Conference on Big Data (Big Data), pp. 5646–8, 2020.
26. Schweidtmann AM, Rittig JG, König A, Grohe M, Mitsos A, Dahmen M. Graph neural networks for prediction of fuel ignition quality. Energy Fuels. 2020;34:11395–407. doi: 10.1021/acs.energyfuels.0c01533.
27. Ryu S, Lim J, Hong SH, Kim WY. Deeply learning molecular structure-property relationships using attention- and gate-augmented graph convolutional network. 2018. doi: 10.48550/ARXIV.1805.10988.
28. Xie T, Grossman JC. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. Phys Rev Lett. 2018;120(14):145301. doi: 10.1103/physrevlett.120.145301.
29. Omprakash P, Manikandan B, Sandeep A, Shrivastava RPV, Panemangalore DB. Graph representational learning for bandgap prediction in varied perovskite crystals. Comput Mater Sci. 2021;196:110530. doi: 10.1016/j.commatsci.2021.110530.
30. Gu GH, Jang J, Noh J, Walsh A, Jung Y. Perovskite synthesizability using graph neural networks. npj Comput Mater. 2022;8:71. doi: 10.1038/s41524-022-00757-z.
31. Eremin RA, Humonen IS, Kazakov AA, Lazarev VD, Pushkarev AP, Budennyy SA. Graph neural networks for predicting structural stability of Cd- and Zn-doped CsPbI3. Comput Mater Sci. 2024;232:112672. doi: 10.1016/j.commatsci.2023.112672.
32. Jacobsson TJ, Hultqvist A, García-Fernández A, Anand A, Al-Ashouri A, Hagfeldt A, et al. An open-access database and analysis tool for perovskite solar cells based on the FAIR data principles. Nat Energy. 2022;7:107–15. doi: 10.1038/s41560-021-00941-3.
33. Kipf TN, Welling M. Semi-supervised classification with graph convolutional networks. 2016. doi: 10.48550/ARXIV.1609.02907.
34. Morris C, Ritzert M, Fey M, Hamilton WL, Lenssen JE, Rattan G, et al. Weisfeiler and Leman go neural: higher-order graph neural networks. 2018. doi: 10.48550/ARXIV.1810.02244.
35. Bresson X, Laurent T. Residual gated graph ConvNets. 2017. doi: 10.48550/ARXIV.1711.07553.
36. Wu F, Zhang T, Souza AHD, Fifty C, Yu T, Weinberger KQ. Simplifying graph convolutional networks. 2019. doi: 10.48550/ARXIV.1902.07153.
37. Brody S, Alon U, Yahav E. How attentive are graph attention networks? 2021. doi: 10.48550/ARXIV.2105.14491.
38. Shi Y, Huang Z, Feng S, Zhong H, Wang W, Sun Y. Masked label prediction: unified message passing model for semi-supervised classification. 2020. doi: 10.48550/ARXIV.2009.03509.
39. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9:1735–80. doi: 10.1162/neco.1997.9.8.1735.
40. Rumelhart D, Hinton G, Williams R. Learning internal representations by error propagation. In Readings in Cognitive Science. Collins A, Smith EE, Eds. Morgan Kaufmann, 1988, pp. 399–421. doi: 10.1016/B978-1-4832-1446-7.50035-2.
41. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, et al. Attention is all you need. In Advances in Neural Information Processing Systems, vol. 30. Guyon I, Luxburg UV, Bengio S, Wallach H, Fergus R, Vishwanathan S, et al., Eds. Curran Associates, Inc., 2017.
42. Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, et al. PyTorch: an imperative style, high-performance deep learning library. 2019. doi: 10.48550/ARXIV.1912.01703.
43. Fey M, Lenssen JE. Fast graph representation learning with PyTorch Geometric. 2019. doi: 10.48550/ARXIV.1903.02428.
44. Schneider P, Xhafa F. Chapter 3: Anomaly detection: concepts and methods. In Anomaly Detection and Complex Event Processing over IoT Data Streams. Schneider P, Xhafa F, Eds. Cambridge (MA): Academic Press, 2022, pp. 49–66. doi: 10.1016/B978-0-12-823818-9.00013-4.
45. Li W, Hu J, Chen Z, Jiang H, Wu J, Meng X, et al. Performance prediction and optimization of perovskite solar cells based on the Bayesian approach. Sol Energy. 2023;262:111853. doi: 10.1016/j.solener.2023.111853.
46. Lu Y, Wei D, Liu W, Meng J, Huo X, Zhang Y, et al. Predicting the device performance of the perovskite solar cells from the experimental parameters through machine learning of existing experimental results. J Energy Chem. 2023;77:200–8. doi: 10.1016/j.jechem.2022.10.024.
47. Mohanty D, Palai AK. Comprehensive machine learning pipeline for prediction of power conversion efficiency in perovskite solar cells. Adv Theory Simul. 2023;6. doi: 10.1002/adts.202300309.
48. Fukasawa R, Asahi T, Taniguchi T. Effectiveness and limitation of the performance prediction of perovskite solar cells by process informatics. 2024. doi: 10.1039/D3YA00617D.
49. Vubangsi M, Mubarak AS, Al-Turjman F. Enhancing predictive modeling of photovoltaic materials' solar power conversion efficiency using explainable AI. Energy Rep. 2024;11:3824–35. doi: 10.1016/j.egyr.2024.03.035.