1. Introduction
Inspired by living organisms that evolved from ancestors living millions of years ago, scientists and engineers have proposed many nature-inspired algorithms [1]. However, according to the No Free Lunch (NFL) theorem [2], no algorithm, including its improved variants, can solve all existing problems. To verify the capability of algorithms, many test functions have been formulated; most of them have become classical in the literature and are called benchmark functions [3]. After thorough verification experiments, the algorithms were soon applied to real-world engineering problems, for instance, routing prediction for softwarized vehicular networks [4], adaptive routing for edge software-defined vehicular networks [5], and storage strategy planning for satellite-terrestrial networks [6]. However, it soon became clear that no single algorithm can efficiently solve all of the problems encountered. Therefore, new algorithms and improvements of existing ones remain in demand.
Literature reviews classify nature-inspired algorithms into four types based on the source of inspiration: evolutionary, swarm-based, human-based, and physics-based algorithms [7]. The exact number of proposed nature-inspired algorithms is hard to state given the current prosperity of the field in artificial intelligence [8], and the number of improvements of any given algorithm is equally hard to quantify, since improvements may touch every aspect of an existing algorithm. For instance, the particle swarm optimization (PSO) algorithm, proposed in 1995 [9], became popular soon after its introduction, and numerous improvements have been proposed worldwide. In the traditional PSO algorithm, each individual updates its position in every iteration according to its velocity, randomly weighted distances to its own best trajectory, and the global best candidate.
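For reference, the commonly cited PSO update rules (standard formulation from the PSO literature; the notation here is ours, with inertia weight $w$, acceleration coefficients $c_1, c_2$, personal best $p_i$, and global best $g$) read:
$$v_i^{t+1} = w\,v_i^{t} + c_1 r_1\left(p_i - x_i^{t}\right) + c_2 r_2\left(g - x_i^{t}\right), \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1},$$
where $r_1, r_2$ are pseudo-random numbers drawn uniformly from [0, 1], exactly the kind of randomness that the chaotic improvements discussed below aim to replace.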
Looking at improved PSO algorithms in detail: first, the continuous domain definition was found unable to meet the requirements of problems in the digital world, so a discrete binary version of PSO was proposed [10]. A weight on the velocity term was soon added and yielded surprisingly better performance [11]. Time-varying inertia weights were then evaluated to remove the constant character of this weight parameter [12]. Other improvements reconsidered the contribution of the three update terms and decreased them linearly to increase the convergence rate, leading to the neighborhood-operator-improved PSO algorithm [13].
Besides the parameters of the original algorithm, the way individuals participate in the swarm is another interesting topic. A fixed number of successful or failed operations can be introduced to guarantee convergence [14]. In the fully informed PSO algorithm [15], all individuals contribute to the position updates, departing from the original combination of velocity, historical best, and global best candidate. Eliminating the historical trajectory by setting c1 = 0 [16] or replacing the global best candidate with the worst one in each iteration [17] are further variations.
Furthermore, the velocities or positions of individuals may be reinitialized during the iterations [18], and multiple updating rules may be combined [19]. For instance, in the bare-bones PSO algorithm [20], half of the individuals are updated according to the global best candidate of the iteration and the other half in the traditional way. The selection probability may also be defined by the number of update channels available to the individuals, as in heterogeneous PSO [21].
Apart from the hybridization of well-known algorithms [22], the randomness involved in the algorithms has been another hot spot for improvements. Traditional pseudo-random numbers are drawn uniformly from the given domain, while random weights follow a Gaussian distribution and random steps are drawn accordingly. However, the population of a swarm cannot be made very large on current computer hardware, and such a small sample does not reproduce the statistics of the underlying distribution well, which may reduce the capability for exploration and exploitation [23]. Since a Lévy flight [24] generates occasional long steps after several rounds of small steps, it can be used to increase the optimization performance (a sketch of the step generation is given below), and the better overall performance reported in the literature shows that this kind of improvement can indeed strengthen the original algorithm [25]. Another common improvement is to replace the ordinary randomness with chaos: because chaotic sequences are derived from non-linear dynamic systems, they behave more erratically when generating numbers, which is attractive precisely when the population size must stay small. In fact, almost all existing algorithms have been improved with chaos, as shown in Table 1.
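As context for the Lévy-flight-based randomness mentioned above, a minimal sketch of step generation using Mantegna's algorithm (a standard construction from the Lévy-flight literature, not code from the cited works) is:

```python
import numpy as np
from math import gamma, sin, pi

def levy_steps(n, beta=1.5):
    """Mantegna's algorithm: heavy-tailed step lengths with occasional long
    jumps among many small steps (beta around 1.5 is a common choice)."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, n)   # numerator samples
    v = np.random.normal(0.0, 1.0, n)       # denominator samples
    return u / np.abs(v) ** (1 / beta)      # Levy-distributed steps
```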
Table 1. History of nature-inspired algorithms and their chaotic improvements.
Therefore, we focused our attention on chaotic maps, collected most of those reported in the literature, and tried to verify whether they actually perform better. The main contributions of this paper are:
1) A collection of 19 popular chaotic maps, all of which can generate random-like numbers in the interval between 0 and 1.
2) A brief review of chaotic improvements, summarized into four types.
3) Detailed comparisons and simulation experiments applying these maps to improve the well-known grey wolf optimization (GWO) and sine cosine (SC) algorithms.
The rest of this paper is arranged as follows: Section 2 lists all the chaotic maps we found that generate chaos in [0, 1]. Section 3 reviews the literature on chaotic improvements. Section 4 presents the chaotic improvements and the corresponding experiments. Section 5 provides the discussion and conclusions.
2. The chaos
Chaos is a deterministic yet random-like phenomenon, mostly derived from non-linear dynamic systems [39]. A chaotic sequence is also a kind of pseudo-randomness; however, since it is generated by a chaotic system, it is typically bounded, irregular, and sensitive to the initial conditions [40]. At the same time, the values spread over the whole domain along a route determined by the initial value. Chaos can therefore be used to reduce the uncertainty of ordinary pseudo-randomness, and better stability may be expected.
To replace standard pseudo-random numbers, the chaotic values must lie in the interval between 0 and 1. Consequently, a transformation is needed whenever the original chaos is confined to a domain other than [0, 1]. The following chaotic maps can all generate chaos satisfying this requirement, with or without such extra transformation functions.
2.1. CM1: Bernoulli map
The Bernoulli map is a well-known chaos generator; it is formulated as follows [41]:
$$x_{n+1} = \begin{cases} 1.75x_n - 0.5, & x_n > 0 \\ 1.75x_n + 0.5, & x_n \le 0 \end{cases} \tag{1}$$
where $x_n$ is the chaotic value at the $n$-th iteration and $x_{n+1}$ is the value at the next iteration.
To obtain pseudo-randomness in the interval between 0 and 1, the chaos generated by the Bernoulli map is transformed with doubling and absolute-value functions. The values over five hundred iterations are shown in Figure 1.
Note that the Bernoulli map was iterated five hundred times; for comparability, all of the following chaotic maps are also iterated five hundred times. A minimal code sketch of this map is given below.
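A minimal sketch of how such a bounded sequence in [0, 1] can be produced, assuming the doubling-plus-absolute-value transform described above (the function name is ours):

```python
import numpy as np

def bernoulli_sequence(n_iter=500, x0=0.3):
    """Iterate the Bernoulli map, Eq. (1), and map the values into [0, 1]."""
    xs = np.empty(n_iter)
    x = x0
    for i in range(n_iter):
        # Bernoulli map: values stay roughly within (-0.5, 0.5]
        x = 1.75 * x - 0.5 if x > 0 else 1.75 * x + 0.5
        # doubling + absolute value lands the sequence in [0, 1]
        xs[i] = abs(2.0 * x)
    return xs

print(bernoulli_sequence()[:10])  # first ten transformed values
```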
2.2. CM2: Chebyshev map
Chaos generated by the Chebyshev map is obtained from the following equation [32]:
$$x_{n+1} = \cos\!\left(n \cos^{-1}(x_n)\right) \tag{2}$$
Chaos generated by the Chebyshev map fluctuates in the interval between −1 and 1. Therefore, to replace standard pseudo-random numbers, the final chaos is transformed with the absolute-value function, as shown in Figure 2.
The Circle map [32] is formulated with the sine function:
$$x_{n+1} = \left[x_n + b - \frac{a}{2\pi}\sin(2\pi x_n)\right] \bmod 1 \tag{3}$$
where mod 1 takes the remainder after division by 1. With a = 0.5 and b = 2, the Circle map generates chaos in the interval between 0 and 1, as shown in Figure 3.
The Hybrid map is a newer kind of chaos generator [42]; the hybrid system is formulated as follows:
$$x_{n+1} = \begin{cases} b\left(1 - \mu_1 x_n^2\right), & -1 < x_n \le 0 \\ 1 - \mu_2 x_n, & 0 < x_n < 1 \end{cases} \tag{5}$$
where $\mu_1 = 1.8$, $\mu_2 = 2.0$, and $b = 0.85$. The chaos generated by the Hybrid map falls within [−1, 1]; therefore, the absolute-value function is introduced to constrain the chaos to the [0, 1] domain, as shown in Figure 5.
The iterative chaotic map with infinite collapses (ICMIC) is a one-dimensional chaotic map [43]; it is based on the sine function and formulated as follows:
$$x_{n+1} = \sin\!\left(\frac{a}{x_n}\right) \tag{6}$$
Due to the range of the sine function, the ICMIC map also generates chaos in [−1, 1]; the absolute-value function is again introduced to transform the chaos into [0, 1], as seen in Figure 6.
The Intermittency map is an extension of the Bernoulli shift in which the piecewise linear branch representing the passive interval is replaced, as described by the following equation [32,39]:
$$x_{n+1} = \begin{cases} \varepsilon + x_n + c x_n^m, & 0 < x_n \le d \\ \dfrac{x_n - d}{1 - d}, & d < x_n < 1 \end{cases} \tag{7a}$$
where $\varepsilon$ is a small number and $c$ is calculated as follows:
$$c = \frac{1 - \varepsilon - d}{d^m} \tag{7b}$$
The Intermittency map generates chaos whose values lie in [0, 1] but follow a rather uneven distribution, as seen in Figure 7.
The Iterative map is similar to the ICMIC map, as shown in the following equation:
$$x_{n+1} = \sin\!\left(\frac{a\pi}{x_n}\right) \tag{8}$$
The Iterative map also generates chaos in [−1, 1], so it is transformed with the absolute-value function. The distribution of the chaos can be seen in Figure 8.
The Quadratic map is decoded from shared code [41]; it is a variant of the Mandelbrot map [45] and formulated as follows:
$$x_{n+1} = b - a x_n^2 \tag{14}$$
where $a = 4$ and $b = 0.5$. The Quadratic map generates chaos in [−0.5, 0.5]. To replace pseudo-random numbers, doubling and absolute-value functions are introduced; the final chaotic values can be seen in Figure 14.
Two kinds of Tent map appear in the literature. The first is similar to the well-known Logistic map and is defined by the following equation [32]:
$$x_{n+1} = \begin{cases} \dfrac{x_n}{0.7}, & x_n < 0.7 \\ \dfrac{10}{3}\left(1 - x_n\right), & x_n \ge 0.7 \end{cases} \tag{18}$$
We call this the Tent1 map, labeled CM18 in the following; it can be used directly to replace pseudo-random numbers, as shown in Figure 18.
The other kind of Tent map is defined by the following iterated function [44]:
$$x_{n+1} = \begin{cases} \mu x_n, & x_n < \tfrac{1}{2} \\ \mu\left(1 - x_n\right), & x_n \ge \tfrac{1}{2} \end{cases} \tag{19}$$
When $\mu = 2 - \varepsilon$, this Tent map generates chaos. We call it the Tent2 map, labeled CM19 in the following; its values can be seen in Figure 19. A consolidated code sketch of several of the maps above is given below.
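As an illustration, a few of the maps above can be collected as simple update functions together with their transforms into [0, 1] (a sketch under our own naming; parameter values follow the text, except the small ε for the Tent2 map, which is chosen arbitrarily here):

```python
import math

# Each entry: (update x_n -> x_{n+1}, transform into [0, 1]).
# The initial value x0 must lie in the respective map's domain.
CHAOTIC_MAPS = {
    # Circle map, Eq. (3), with a = 0.5 and b = 2
    "circle": (lambda x: (x + 2 - (0.5 / (2 * math.pi)) * math.sin(2 * math.pi * x)) % 1.0,
               lambda x: x),
    # Quadratic map, Eq. (14), with a = 4 and b = 0.5; doubled absolute value maps [-0.5, 0.5] into [0, 1]
    "quadratic": (lambda x: 0.5 - 4.0 * x ** 2,
                  lambda x: abs(2.0 * x)),
    # Tent2 map, Eq. (19), with mu = 2 - eps and a small eps
    "tent2": (lambda x, mu=2 - 1e-4: mu * x if x < 0.5 else mu * (1 - x),
              lambda x: x),
}

def chaotic_sequence(name, n_iter=500, x0=0.3):
    """Iterate the named map and return its values transformed into [0, 1]."""
    update, transform = CHAOTIC_MAPS[name]
    xs, x = [], x0
    for _ in range(n_iter):
        x = update(x)
        xs.append(transform(x))
    return xs
```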
3. A brief review of chaotic improvements in literature
Chaotic maps have been used to improve nature-inspired algorithms for many years, and scientists and engineers around the world are still trying to improve existing algorithms with chaos.
3.1. Ways of chaotic applications in improvements
Attempts have been made to improve the capability of existing algorithms in every aspect that could be found.
First, since all individuals in the swarm must be initialized at the beginning, their positions should be spread over the whole domain of the given problem and distributed as evenly as possible, so that every optimum, whether global or local, lies close to some individuals and can be found as early as possible. For this purpose, traditional pseudo-random numbers may not be the best choice. Such an effort has been made, for example, in a chaotic PSO algorithm [46]. However, our earlier simulation experiments on benchmark functions optimized by a chaotic slime mould (SM) algorithm [47] showed that chaotic initialization can sometimes decrease the optimization capability. A sketch of this kind of initialization is given below.
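A minimal sketch of chaotic initialization (the function names and the scaling into the search bounds are our own illustration, not the exact code of the cited works):

```python
import numpy as np

def chaotic_init(pop_size, dim, lb, ub, chaos_step, x0=0.3):
    """Type-1 improvement: initialize positions from a chaotic sequence
    instead of uniform pseudo-random numbers."""
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    pos = np.empty((pop_size, dim))
    c = x0
    for i in range(pop_size):
        for j in range(dim):
            c = chaos_step(c)                        # next chaotic value in [0, 1]
            pos[i, j] = lb[j] + c * (ub[j] - lb[j])  # scale into the search bounds
    return pos

# Example with a tent-like map (Eq. (19), mu close to 2)
tent2 = lambda x, mu=1.9999: mu * x if x < 0.5 else mu * (1 - x)
positions = chaotic_init(pop_size=30, dim=10, lb=[-100] * 10, ub=[100] * 10, chaos_step=tent2)
```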
The second kind of chaotic application is to replace the pseudo-random numbers themselves; most of the effort in the literature has gone into this kind of improvement. Every random number falling in the interval between 0 and 1 can be replaced by chaos, and since several such numbers are usually involved, each of them can be replaced by each kind of chaotic map in turn, as done in the chaotic bat algorithm [32] and the chaotic whale optimization (WO) algorithm [36]. The results fluctuate with the characteristics of the different chaotic maps and even with the position of the random number within the update equations. A sketch of this kind of replacement is given below.
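A sketch of this replacement for the GWO coefficients, assuming the standard GWO formulas A = 2a·r1 − a and C = 2·r2 and replacing only the first random number, as later done in the experiments of Section 4 (the helper names are ours):

```python
import numpy as np

def gwo_coefficients(a, chaos_step, state):
    """Type-2 improvement: build the GWO coefficients A and C, with the first
    uniform random number r1 replaced by a chaotic value (r2 stays pseudo-random)."""
    state["x"] = chaos_step(state["x"])  # chaotic value in [0, 1] used instead of r1
    r1 = state["x"]
    r2 = np.random.rand()                # second random number left unchanged
    A = 2.0 * a * r1 - a                 # standard GWO coefficient A
    C = 2.0 * r2                         # standard GWO coefficient C
    return A, C

# usage: keep the chaotic state across calls
tent2 = lambda x, mu=1.9999: mu * x if x < 0.5 else mu * (1 - x)
state = {"x": 0.3}
A, C = gwo_coefficients(a=2.0, chaos_step=tent2, state=state)
```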
The third kind of chaotic application is to define a chaotic search domain near the global best candidate during the iterations. This is an efficient way to check whether the global best candidate is actually approaching the optimum: in every iteration, the current best position is used to generate a small search domain, and several additional chaotic search steps are carried out to check whether an even better position exists. This operation increases the time complexity but can dramatically increase the convergence rate. Most of the attention in the literature has been paid to this kind of improvement, for example in the chaotic PSO algorithm [46] and the chaotic grey wolf optimization (GWO) algorithm [39].
A fourth kind of chaotic application was proposed most recently. In almost all swarm-based algorithms there is a control parameter that balances the ratio of exploration and exploitation during the iterations, for example the chaotic parameter f in the chimp optimization algorithm (ChOA) [48] or the control parameter a in the GWO algorithm [2]. These parameters usually decline linearly from a maximum to a minimum or to zero. In 2019, Saxena et al. [49] proposed a non-linear, chaotic decline in which the new control parameter a is defined as a combination of the traditional parameter and chaos following the β distribution. However, although simulation experiments verified its capability in optimization, the overall results were not promising as a whole.
3.2. Performance of chaotic applications in improvements
Among the above four types of chaotic applications, the first has frequently been shown in the literature to be ineffective. The remaining types are not deterministic in their effect: sometimes the chaos-enabled algorithms perform better, and sometimes they do not.
Owing to the characteristics of chaos, chaotic-map-based improvement has been popular in the literature for decades, and attempts have been made to improve almost all algorithms with chaotic maps, as shown in Table 2.
Table 2. Algorithms and their chaotic improvements.
4. Chaotic improvements and experiments
In Section 3 we identified four types of chaotic applications for improving the performance of existing algorithms. Some of them have already been shown to be ineffective in most cases, while others are frequently reported to perform either better or worse. In Section 2 we collected 19 chaotic maps, all of which generate chaos in the interval between 0 and 1. Some of them, for example the Logistic and Piecewise maps, are used frequently in applications, while others seldom appear in the literature on nature-inspired algorithms. In this section we carry out simulation experiments to verify their suitability and capability. For simplicity, the GWO [57] and sine cosine (SC) [58] algorithms are taken as representatives of swarm-based nature-inspired algorithms, and the convergence rate is the only metric considered.
4.1. Experimental setup
Simulation experiments were carried out to verify the capability of the algorithms. In total, 19 chaotic maps, four types of improvements, and two well-known algorithms are involved in this paper.
Because randomness is involved in every nature-inspired algorithm, the results change from run to run. To reduce this influence, Monte Carlo methods are commonly used to average the results when verifying the optimization capability of algorithms. In our experiments, 100 Monte Carlo runs were carried out and the final results are their averages, as sketched below.
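A sketch of this averaging step (the optimizer interface is our assumption; the paper does not provide code):

```python
import numpy as np

def monte_carlo_convergence(optimizer, objective, n_runs=100, max_iter=500):
    """Average the best-so-far convergence curves over n_runs independent runs.
    `optimizer(objective, max_iter)` is assumed to return the best fitness
    value found at each iteration as a sequence of length max_iter."""
    curves = np.zeros((n_runs, max_iter))
    for r in range(n_runs):
        curves[r, :] = optimizer(objective, max_iter)
    return curves.mean(axis=0)  # averaged convergence curve
```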
For simplicity, we improved the GWO and SC algorithms with the different chaotic maps, and all of the benchmark functions in Tables 3−6 were used in the simulation experiments.
4.2. Type 1: Chaotic mapped initialization
For the first kind of improvement, the results are shown in Figures 20 to 23.
We find that although the different chaotic maps generate different kinds of chaos when introduced at the initialization stage, some of them perform better and some do not. For the Ackley 1 function, almost all of the chaotic initializations performed better than the original algorithms. For the Alpine 2 function, all of the chaotic-mapped SC algorithms performed better than before, whereas only the Chebyshev, Tent1, Piecewise, Intermittency, Sine, Gauss, Logistic, and ICMIC maps resulted in better performance for the GWO algorithm.
When optimizing the Chung Reynolds function, only a few of the chaotic maps performed worse than before, although in different ways for the SC algorithm (only the Liebovitch map performed worse) and the GWO algorithm (the Logistic, Chebyshev, Kent, and Liebovitch maps performed worse).
Similar conclusions can be drawn for the Bent Cigar function: all of the chaotic maps yielded better performance for the SC algorithm, while for the GWO algorithm the Hybrid, Chebyshev, and Tent2 maps performed worse.
Based on the results above, we cannot conclude in general whether chaotic maps introduced at the initialization stage perform better or worse; each combination of chaotic map and benchmark function needs to be reviewed and verified separately, and the outcome is hard to predict even for an experienced practitioner. Nevertheless, chaos introduced at the initialization stage improves the optimization capability with high probability.
4.3. Type 2: Chaotic mapped randomness
In this type of improvement, pseudo-random numbers are replaced by chaos to reduce the influence of traditional pseudo-randomness.
Since more than one random number is usually involved in these algorithms, every one of them could in principle be replaced, as in the chaotic bat algorithm [32]; furthermore, different chaotic maps could be used for each random number.
In this paper, only the first random number was replaced by chaos while the remaining random numbers were left unchanged. The results, again averaged over 100 Monte Carlo runs, are shown in Figures 24−27.
The convergence curves for the different benchmark functions show that no strict rule can be derived from this kind of experiment; whether an improvement occurred appeared random across the chaotic maps and benchmark functions.
4.4. Type 3: Chaotic mapped iterations
The most popular improvement is to carry out several extra rounds of search with chaos. In this experiment, the number of extra rounds was set to chaoticMaxIter = 10.
In each main iteration, the individuals update their positions and the best candidate is determined. Then chaoticMaxIter extra search steps are carried out with chaos around this candidate; if a better position is found, it becomes the global best candidate of the current iteration. Otherwise, if all chaoticMaxIter steps finish without improvement, the current best candidate keeps its position and leads the swarm into the next iteration. A sketch of this procedure is given below.
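A minimal sketch of this chaotic local search around the best candidate (the search radius and the way chaos perturbs the position are our own assumptions; the text only fixes chaoticMaxIter = 10):

```python
import numpy as np

def chaotic_local_search(objective, best_pos, best_fit, lb, ub,
                         chaos_step, state, chaotic_max_iter=10, radius=0.1):
    """Type-3 improvement: extra chaotic search steps around the current best candidate."""
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    for _ in range(chaotic_max_iter):
        state["x"] = chaos_step(state["x"])                      # chaotic value in [0, 1]
        # perturb the best position within a small chaotic neighbourhood
        trial = best_pos + radius * (2.0 * state["x"] - 1.0) * (ub - lb)
        trial = np.clip(trial, lb, ub)
        fit = objective(trial)
        if fit < best_fit:                                       # keep any improvement
            best_pos, best_fit = trial, fit
    return best_pos, best_fit
```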
Due to the extra chaotic exploration and exploitation around the best candidate, which may lie near the global or a local optimum, better results would be expected. However, the final results shown in Figures 28−31 do not fully support this expectation.
For the Sargan and Holzman 2 functions, most of the chaotic maps increased the optimization capability and only a few failed to do so. However, 18 of 19 chaotic maps failed to improve the SC algorithm on the Griewank function, for which the highly complex landscape with many optima may be the reason, and 13 of 19 chaotic maps failed to improve the SC algorithm on the Schwefel 1.2 function, for which the valley-shaped landscape may be a reason.
Statistically speaking, this kind of improvement does increase the performance of the algorithms as a whole, but not every chaotic map improves (or degrades) the optimization capability. No firm conclusion can be drawn from a first glance at the benchmark functions or the chaotic maps; simulation experiments are still needed to verify whether a given chaotic map improves a selected algorithm.
4.5. Type 4: Chaotic mapped controlled parameters
The control parameter balancing the exploration-exploitation ratio usually declines linearly from its maximum to its minimum value, for instance the energy parameter E in the Harris hawks optimization (HHO) algorithm [59] or the parameter a in the SC and GWO algorithms. Sometimes other schedules are used, such as the exponential function in the equilibrium optimization (EO) algorithm [60] or the arctanh [61] and cosine [62] functions in the slime mould (SM) algorithm. In the SC and GWO algorithms, the control parameter declines linearly from 2 to zero over the iterations, combined with randomness. This randomness can also be replaced by chaos to improve capability, steadiness, and suitability, as sketched below.
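A minimal sketch of a chaos-modulated control parameter (assumption: the linear schedule is simply scaled by the chaotic value; the precise β-distribution-based formula of Saxena et al. [49] is not reproduced here):

```python
def chaotic_control_parameter(t, max_iter, chaos_step, state, a_max=2.0):
    """Type-4 improvement: the linearly declining control parameter a (from
    a_max down to 0, as in GWO/SC) is perturbed with a chaotic value instead
    of a pseudo-random one."""
    state["x"] = chaos_step(state["x"])      # chaotic value in [0, 1]
    a_linear = a_max * (1.0 - t / max_iter)  # standard linear decline
    return a_linear * state["x"]             # chaos-modulated control parameter
```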
This kind of improvement was proposed only recently, using a specific and seldom-applied chaotic map based on the β distribution [49]. In this experiment, the 19 classical chaotic maps were used to replace the β-distributed chaos; the results are shown in Figures 32 to 35.
The results show that this kind of improvement sometimes increases the capability of the optimization algorithm and sometimes fails to do so. For the Schwefel 1.2 function, all of the chaotically improved GWO algorithms performed better than the original, while only six of them improved the SC algorithm.
4.6. Discussion of the results
A literature survey was carried out and four classical types of improvement, including a newly proposed one, were identified. The detailed results of this paper are summarized in Tables 7 and 8.
Table 7. Summary of the simulation experiments on the GWO algorithm. The benchmark functions F1−F16 comprise four categories (unimodal, multimodal, unimodal with basins, unimodal with valleys); the column B counts the functions on which the chaotically improved algorithm performed better.

Chaos | F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15 F16 | B
CM1   | W  B  B  W  W  W  W  B  B  B   B   B   B   B   B   B   | 11
CM2   | W  W  B  W  W  W  W  W  W  B   B   W   B   W   B   W   | 5
CM3   | W  B  B  W  B  W  W  B  W  B   B   B   B   B   B   W   | 10
CM4   | W  B  B  W  B  B  B  B  W  B   B   B   B   B   B   W   | 12
CM5   | W  W  B  W  B  B  B  B  W  B   B   B   B   B   B   W   | 11
CM6   | W  B  B  B  W  W  W  W  W  B   B   B   B   W   B   W   | 8
CM7   | W  B  W  W  W  W  W  W  W  W   B   W   B   B   B   W   | 5
CM8   | W  B  B  W  B  B  B  B  B  B   B   B   B   B   B   W   | 13
CM9   | W  B  B  W  B  B  B  B  B  W   B   B   B   B   B   B   | 13
CM10  | W  B  B  W  B  B  W  B  W  W   B   B   B   B   B   W   | 10
CM11  | W  B  B  B  W  B  N  B  B  B   B   W   B   B   B   W   | 11
CM12  | W  B  B  W  W  B  B  W  W  B   B   B   B   B   B   W   | 10
CM13  | W  B  B  B  B  B  B  B  B  B   B   B   B   B   B   B   | 15
CM14  | W  B  B  W  W  W  W  W  B  W   B   W   B   W   B   W   | 6
CM15  | W  B  B  W  W  B  B  W  W  B   B   B   B   B   B   –   | 10
CM16  | W  B  B  W  B  W  W  W  W  B   B   B   B   B   B   W   | 9
CM17  | W  W  B  W  W  W  W  W  W  W   W   N   B   W   B   W   | 3
CM18  | W  B  B  W  B  B  B  B  W  B   B   B   B   B   B   B   | 13
CM19  | W  B  B  W  W  B  B  B  W  B   B   W   B   B   B   W   | 10
B     | 0  16 18 3  9  10 9  12 6  13  18  13  19  15  19  5   |

Note: B (better) means the chaotically improved algorithm performed better than the original algorithm; W (worse) means the opposite; N (neutral) means no clear difference could be determined.
We find that the chaotic maps increase the capability of the GWO algorithm more than that of the SC algorithm. Most of the chaotic maps result in better performance when applied to the GWO algorithm, as the simulation experiments verified. Only a few of them, precisely 6 of 19, such as the ICMIC, Intermittency, Quadratic, Singer, and Sinusoidal maps, performed better on fewer than 10 of the 16 benchmark functions.
For the SC algorithm, in contrast, only the Bernoulli and Kent maps performed better on at least 10 of the 16 benchmark functions; most of the maps could not yield better results.
Focusing on the characteristics of the benchmark functions, four types were considered: unimodal, multimodal, and unimodal with basin-like or valley-like three-dimensional profiles [3]. However, we still cannot draw firm conclusions from the experimental results. For example, for the unimodal benchmark functions optimized with the GWO algorithm, none of the chaotically improved GWO algorithms performed better on the Ackley 1 function, while 16 and 18 of the 19 chaotic maps performed better on the Exponential and Sargan functions, respectively. For the SC algorithm, most of the chaotic maps failed to do so.
5. Discussion and conclusions
In this paper, we focused on chaotic maps that can generate pseudo-random numbers in the interval between 0 and 1. Nineteen classical chaotic maps were collected, and their effect on improving the well-known GWO and SC algorithms was investigated.
Four types of benchmark functions, covering two main classical characteristics (modality and profile shape), were used for the simulation experiments. For clarity and simplicity, only convergence experiments were carried out and the convergence curves were shown in figures. From this study of chaotic improvements we conclude that:
A. Although chaotic maps can be used to replace pseudo-random numbers in computer engineering, they do not reliably increase the optimization performance of the algorithms.
B. All four types of chaotic improvement sometimes perform better, yet none of them can be guaranteed to perform better for a given problem.
C. Modality and landscape shape influence the performance of the optimization algorithms, but they are not the only factors. A detailed mathematical study may be needed to explain why some benchmark functions are easily optimized while others are not.
D. Applying chaotic maps to improve optimization algorithms in engineering is therefore questionable. It may work for a given problem and algorithm, as happens in some fields. Nevertheless, due to the uncertainty of any specific chaotic map across improvement types and functions, it appears unnecessary to establish chaos-based improvement platforms in computer engineering.
E. It may be of little value to investigate the performance of chaos in optimization further.
Acknowledgments
The authors would like to acknowledge the support of the following projects: the scientific research team project of Jingchu University of Technology (grant number TD202001) and the key research and development project of Jingmen (grant number 2019YFZD009).
Conflict of interest
There is no conflict of interest.
Appendix
Figure 20. Convergence versus iterations for the Ackley 1 function.
Table 8. Summary of the simulation experiments on the SC algorithm. The benchmark functions F1−F16 comprise the same four categories as in Table 7 (unimodal, multimodal, unimodal with basins, unimodal with valleys); the column B counts the functions on which the chaotically improved algorithm performed better.

Chaos | F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15 F16 | B
CM1   | W  B  B  W  W  B  W  B  B  W   B   B   B   W   B   B   | 10
CM2   | W  W  W  W  B  W  W  W  B  W   B   W   W   W   B   B   | 5
CM3   | W  B  W  W  B  B  W  W  B  W   B   W   W   W   W   W   | 5
CM4   | W  B  W  W  B  W  W  W  B  W   B   W   W   B   W   B   | 6
CM5   | B  B  W  W  B  B  W  W  B  W   B   W   W   W   B   W   | 7
CM6   | B  W  W  W  B  B  W  W  B  W   B   W   W   W   W   B   | 6
CM7   | W  W  W  W  B  W  W  W  B  W   B   W   W   B   W   W   | 4
CM8   | W  W  W  W  B  W  W  W  B  W   B   W   W   B   W   B   | 5
CM9   | W  W  W  W  W  W  W  W  B  W   B   W   W   W   W   W   | 2
CM10  | W  W  W  W  B  W  W  W  B  W   B   W   W   W   W   W   | 3
CM11  | W  B  W  B  B  B  W  W  B  W   B   W   B   W   W   B   | 8
CM12  | W  B  W  W  B  B  W  W  B  W   B   W   W   W   W   B   | 6
CM13  | W  W  B  B  B  W  W  W  B  W   B   B   W   W   W   W   | 6
CM14  | W  B  W  B  B  W  W  B  B  W   B   B   B   B   B   B   | 11
CM15  | W  W  W  W  B  W  W  W  B  W   B   W   B   W   B   W   | 5
CM16  | W  W  W  W  B  W  B  W  B  W   B   W   N   W   W   W   | 4
CM17  | W  W  W  W  B  W  W  W  B  W   B   W   W   W   W   W   | 3
CM18  | W  W  B  B  B  B  W  W  B  W   W   B   W   B   B   B   | 9
CM19  | W  B  W  B  B  B  W  W  B  W   B   W   W   B   W   B   | 8
B     | 2  8  3  5  17 8  1  2  19 0   18  4   4   6   6   10  |

Note: symbols as in Table 7.