
To understand the influence of the Allee effect and intraspecific cooperation on the dynamics of a predator-prey system, we constructed a model using ordinary differential equations. Our analysis shows that the system exhibits rich dynamics, including possible bistability between alternative semi-trivial states and an Allee effect for the prey. The Allee effect can destabilize the system: equilibrium points may change from stable to unstable, and even when an equilibrium remains stable, the system takes much longer to reach it. We also find that the Allee effect on the prey increases the positive equilibrium density of the predator but leaves the positive equilibrium density of the prey unchanged. Notably, nonlinear predator mortality likewise lengthens the time the system needs to reach a steady state.
Citation: Yalong Xue. Analysis of a prey-predator system incorporating the additive Allee effect and intraspecific cooperation[J]. AIMS Mathematics, 2024, 9(1): 1273-1290. doi: 10.3934/math.2024063
Hyperspectral images (HSI) can be derived from ordinary RGB cameras [1,2,3,4,5,6]. However, existing methods require detailed optical properties of the underlying camera, which can only be measured in a laboratory with expensive equipment, preventing ordinary users from using a plug-and-play webcam immediately.
This study proposes a calibration-free and illumination-independent method, as shown in Figure 1, to facilitate ordinary users' instant use of an arbitrary camera. Our algorithm can transform an ordinary webcam into an expensive HSI camera without the help of additional hardware. Mathematically, the forward transformation, mapping from high-dimensional HSI images to low-dimensional RGB images, is relatively easy, compared to the reverse one, mapping from low- to high-dimensional.
However, both the forward and reverse transformations involve a device-dependent response function for the underlying camera. In previous methods, the camera and light source must remain the same, and the spectral response function (SRF) of a camera and the current illuminating function (CIF) of the ambiance must be identified separately by specific equipment, such as standardized light sources, color-checkers, and spectrometers. The images used in the training database must match the same camera and light source. Such limitations prevent the algorithm from being plug-and-play.
Hyperspectral technology has been widely used in different fields [7]. Although hyperspectral imaging possesses tremendous advantages across a wide range of applications, the extremely high acquisition cost (US$40 thousand or above) limits the suitable applications and usage scenarios. Existing computational methods converting RGB to HSI can significantly broaden the applicability of HSI.
In this study, a semi-finished model, independent of device and illumination, is shipped with the installation package, as shown in the Model part of Figure 1. Before use, a setup step is performed without additional hardware to extract the SRF of the underlying camera, as shown in step 1 in Figure 1. During each usage, the ambient CIF is estimated for each taken image, as shown in step 2 in Figure 1. The final reconstruction with the SRF and CIF then projects RGB images to device- and illumination-independent HSI.
To accumulate sparse projection kernels of hyperspectral signatures over many hyperspectral priors, we recover the camera SRF through a most-probable estimation algorithm. As shown in Figure 2, at the setup step, two parametric tasks are accomplished automatically without user involvement. The SRF and CIF are estimated by uploading random images taken by the target camera to the cloud; no calibration step is required of ordinary users. We develop a triangular generative adversarial assimilation comprising two different methods, one supervised and one unsupervised, each with its own advantages, pulling each other up to achieve a high-precision reconstruction.
Despite extensive studies on reconstructing HSI from RGB images, numerous challenges remain unsolved. This study seeks to close three research gaps. Existing studies require a laboratory-calibrated camera, and the trained model is bound to that specific camera. Our results contribute to online retrieval of the SRF and to offline SRF-independent training.
Much research has contributed to reverse mapping from the low-dimensional RGB space to the high-dimensional hyperspectral space [1,4,6,8,9]. However, gaps still exist.
The essence of the forward problem is a spatial convolution between the incident image spectrum and the camera optics [10]. The properties of camera spectral response functions have been studied extensively [11]. Reverse mapping by algebraic or iterative approximation methods is sufficient as long as the camera response function can be correctly acquired before applying the algorithms [4,6]. Reverse mapping in a kernel representation is prevalent and stable [1,8,9]. Statistical inference methods have been reported to be effective in enhancing the accuracy of the reverse mapping [2,3].
With the assistance of additional hardware, the recovered image can be more accurate [5,12,13,14]. However, such assistance does not fit the goal of this study. Therefore, the last piece of the puzzle for the HSI reconstruction is still acquiring camera response functions without the help of expensive instruments.
The second research gap leads to a challenge in algorithm design. Because the central part of the forward problem involves the convolution with the camera SRF, the training algorithm can only retrieve the function mapping if the SRF is determined at the training phase. On the other hand, estimating the SRF is difficult if we do not have pairs of images with input RGB and output HSI [15]. Our problem is to retrieve the HSI without training pairs and without a system model. The original generative adversarial network (GAN) requires paired images $(x,y)$ for the discriminator $D$ and generator $G$ to evaluate $\min_G\max_D V(D,G)=\mathbb{E}_{x\sim p_{\mathrm{data}}(x)}[\log D(x|y)]+\mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z|y)))]$. Thanks to modern progress in maximum likelihood theorems and sparse variations of the GAN, such a "double unknown retrieval" has become possible [16,17,18]. One advantage of cycleGAN over standard GANs is that the training data need not be paired. This implies that, in our situation, the images (RGB, HSI) alone are sufficient for estimating the output SRFs. The idea of conditional and cyclic adversarial networks with image-to-image translation has successfully solved the problem of double unknown retrieval [19].
When the training data are not completely paired, the latent space reconstruction becomes the key part of the double unknown retrieval [20,21,22]. Convolution particle filters have been proven effective in estimating unknown parameters in state-space [23].
The generative network synthesizes the SRFs based on a predictive distribution conditioned on the correctness of the recovery of the original RGB values [19,24]. When the projection to the latent space is not linear, some studies apply advanced algorithms to address the nonlinearity issue [25].
The third gap in the challenges of HSI reconstruction is the interference of versatile illumination conditions. Many studies have tried to decouple the influence of illumination. Despite its simplicity, an accurate spectral reflectance reconstruction must come with a spectral estimation of the illumination [13]. Some solve the issue in the tristimulus value space (RGB or CIE XYZ) by collecting images under multiple illumination conditions [26,27,28]. CIE XYZ does start from the spectral response of the illumination; nevertheless, such approaches primarily target the color appearance for the human eye, not the spectral reflectance of objects. Therefore, a research gap exists in decoupling the influence of unknown illumination for spectral response estimation algorithms.
The sparse assimilation algorithm can reconstruct the parameters from limited observations [29]. However, most iterative algorithms are computationally intensive and thus unsuitable for online use; a tractable algorithm is needed for computational feasibility. We use hierarchical Bayesian learning and Metropolis-Hastings algorithms [30,31] to estimate the joint probability densities. This study therefore exploits an assimilation method adapted to online computation.
As shown in Figure 3, a camera with spectral response function $R^\phi(\lambda,i)$ takes a spectral reflectance input $h(j,\lambda)$ for wavelength $\lambda\in[1,N]$ and produces the tristimulus values $c(j,i)$ for $i\in\{R,G,B\}$ (or $[1,3]$), where $j\in[1,n]$ indexes the $j$-th dataset pair in the database. The 24-patch color-checker pixels are arranged into a column vector for ease of dimension expression.
The forward problem is stated as a dichromatic reflection model (DRM)
$h(\lambda)=E(\lambda)\rho(\lambda),\ \forall\lambda,\qquad c(j,i)=\sum_\lambda h(j,\lambda)R(\lambda,i),\ \forall j,i$,  (3.1)
or,
$C_{n\times 3}=H_{n\times N}R_{N\times 3}$, for the $j$-th image in the database,  (3.2)
where the matrix form collects the indexes such that $C_{n\times 3}=\{c(j,i)\}_{j\in[1,n],\,i\in[1,3]}$.
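The forward model (3.2) is a single matrix product. A minimal numpy sketch, using illustrative dimensions (a 24-patch checker and N = 33 bands, random data rather than the paper's measurements):

```python
import numpy as np

def forward_drm(H, R):
    """Forward dichromatic reflection model (3.2): C = H R.

    H : (n, N) spectral reflectance rows, one per pixel/patch.
    R : (N, 3) camera spectral response function (SRF).
    Returns the (n, 3) tristimulus (RGB) values.
    """
    return H @ R

# Illustrative dimensions: n = 24 color-checker patches, N = 33 bands.
rng = np.random.default_rng(0)
H = rng.random((24, 33))
R = rng.random((33, 3))
C = forward_drm(H, R)
```

The inverse problem of the paper is recovering `H` (and, here, also `R`) from `C` alone, which is why the mapping is ill-posed.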
By the representation theorem, we can find coefficients $\tilde w(k,k')_{k,k'\in[1,m]}=\tilde W$ for $m$ basis functions such that the RGB images $C$ can be approximated by
$C_{n\times 3}\approx\tilde W_{n\times m}\Psi_{m\times N}R^\phi_{N\times 3}$.  (3.3)
The DRM (3.2) becomes
$\tilde W\Psi R^\phi=H R^\phi$.  (3.4)
Therefore,
$H=\tilde W\Psi$.  (3.5)
As discussed in the previous section, many methods exist to estimate the ill-posed back projection, but challenges remain. Ours is an inverse problem: given the tristimulus values $C$, we want to reconstruct $H$. Because the dimension of $H$ is much higher than that of $C$, the inverse projection is ill-posed. Our method requires no pre-built calibration and prevents over-fitting after training.
The CIF is estimated from images of the white block of the color checker. The illumination light $e$ is converted to electric signals through the camera response $e^\phi$, and therefore $e(\lambda)=\sum_i e^\phi(\lambda,i)$. Utilizing the above-mentioned method, the CIF can be decomposed over a set of kernel functions. We have
$e(\lambda)=\sum_i e^\phi(\lambda,i)=\sum_{i,k}\gamma_{i,k}b_k(\lambda,i),\ \forall\lambda\in[1,N]$,  (3.6)
where $b_k(\lambda)$ is the $k$-th basis for the CIF and $\gamma_{i,k}$ is its coefficient.
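Because (3.6) is linear in the coefficients $\gamma$, they can be fit to the white-patch spectrum by ordinary least squares. A sketch with an assumed Gaussian basis (the paper does not specify its basis functions here):

```python
import numpy as np

def estimate_cif(e, B):
    """Least-squares fit of CIF coefficients gamma in e = B @ gamma (Eq 3.6).

    e : (N,) observed illumination spectrum (from the white patch).
    B : (N, K) basis functions b_k sampled at the N bands.
    Returns gamma (K,) and the reconstructed spectrum B @ gamma.
    """
    gamma, *_ = np.linalg.lstsq(B, e, rcond=None)
    return gamma, B @ gamma

# Illustrative setup: N = 33 bands (400-720 nm), K = 5 Gaussian bases
# (a hypothetical basis choice for demonstration only).
lam = np.linspace(400.0, 720.0, 33)
centers = np.linspace(420.0, 700.0, 5)
B = np.exp(-((lam[:, None] - centers[None, :]) / 60.0) ** 2)
true_gamma = np.array([0.2, 0.5, 1.0, 0.7, 0.3])
e = B @ true_gamma                     # synthetic "measured" spectrum
gamma, e_hat = estimate_cif(e, B)
```

On this noise-free synthetic spectrum the fit recovers the generating coefficients exactly, which is the sanity check one would run before using real white-patch data.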
Classical generative adversarial network applications use neural networks entirely, for both classification and regression. Such modeling suffers from convergence problems. In our method, we use optical models as generators and neural networks as discriminators to maximize the use of prior model information for the stochastic process of Lambertian reflectance (Eq 3.2), implemented through a Monte Carlo radiative simulation. Through the collected priors, the discriminator can distinguish between successful and unsuccessful generations. Therefore, the cyclic iteration converges fast and accurately estimates the latent sparse space with sparse converging metrics.
We hybridize a statistical generator and neural network discriminator to maximize the usage of prior model information for the stochastic process. Our generator can also take environment conditions as covariates in the random process and is robust to anti-symmetric station distribution.
A standard GAN comprises a generator G and a discriminator D [32]. The model G and D can be neural networks or any mathematical functions, as long as the optimization (3.7) with Lagrangian η has solutions.
$G^{*}=\arg\min_G\max_D\mathcal{L}_{GAN}(G,D)+\eta\mathcal{L}_{l_1}(G)$.  (3.7)
The standard loss functions have the form
$\mathcal{L}_{GAN}(G,D)=\mathbb{E}_{Z,Z'}[\log D(Z,Z')]+\mathbb{E}_{Z,Z_1}[\log(1-D(Z,G(Z,Z_1)))]$,  (3.8)
$\mathcal{L}_{l_1}(G)=\mathbb{E}_{Z,Z',Z_1}[\lVert Z'-G(Z,Z_1)\rVert_1]$.  (3.9)
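In plain numpy, with discriminator outputs treated as probabilities, the two loss terms (3.8) and (3.9) can be sketched as follows (hypothetical helper functions for illustration, not the actual training code):

```python
import numpy as np

def gan_loss(d_real, d_fake):
    """Adversarial value function (3.8): E[log D] + E[log(1 - D)].

    d_real, d_fake : arrays of discriminator probabilities on real
    and generated samples, respectively.
    """
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def l1_loss(z_true, z_gen):
    """L1 reconstruction term (3.9), the eta-weighted penalty in (3.7)."""
    return np.mean(np.sum(np.abs(z_true - z_gen), axis=-1))

# An undecided discriminator (D = 0.5 everywhere) gives 2*log(0.5).
d = np.full(8, 0.5)
v = gan_loss(d, d)
```

A perfect generator drives `l1_loss` to zero while pushing the discriminator toward the undecided value above, which is the equilibrium the min-max optimization (3.7) seeks.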
The GAN used in this study is a type of cycleGAN, which contains two generators and two discriminators, as shown in Figure 4. Our cycleGAN takes two sets of images as input and produces output containing the corresponding SRF. The observation $Z$ contains the pair $C_3=(R,G,B)$ and the 33-band HSI images $H_{33}$. The subscripts 3 and 33 denote the number of bands and are omitted when the meaning is clear. The latent estimation represents the SRF: $Z'=(R'_{33},H'_{33},C'_3)$. In the target domain, $Z'$ is compared to a small set of collected SRFs $Z_1=R^1_{33}$, which serve only as shape templates. The re-estimated image pairs are $Z''=(C''_3,H''_{33})$, where $C''_3=H'_{33}R'_{33}$, $H'_{33}$ is the coefficient output of $G_{ZZ'}$, and $H''_{33}$ is the direct output of $G_{Z'Z}$.
The models in the cycleGAN are $G_{ZZ'}$, $D_{Z'Z_1}$, $G_{Z'Z}$, and $D_{ZZ''}$, respectively. The cycleGAN takes $C_3$ and $H_{33}$ as the input dataset. We first prepare a set of (RGB, HSI) pairs for different color patches and a known camera (e.g., CIE 1964). Because we expect the same color patch to produce the same HSI through different cameras, we prepare additional image pairs by taking the image from the target unknown camera as the RGB and the HSI of the known camera as the HSI. We also need a small sample of normal SRFs $Z_1=R^1_{33}$ as a target template to prevent multiple solutions that are dissimilar to normal SRFs.
The generator $G_{ZZ'}$ is a reverse model, which takes $Z=(H,C)$ in the source domain and outputs $Z'=(R',H',C')$ in the target domain. During generation, $G_{ZZ'}$ synthesizes SRFs $Z'=R'$, which is equivalent to applying a perturbation $\Delta$ such that $R'=R\Delta$ without violating the modality (3.10) and positivity (3.11) constraints.
$R_i(\lambda_{k+1})>R_i(\lambda_k),\ k=1,\dots,m_i-1;\qquad R_i(\lambda_{k+1})\le R_i(\lambda_k),\ k=m_i,\dots,33$,  (3.10)
$R_i(\lambda_k)\ge 0,\ k=1,\dots,33$,  (3.11)
where $i\in\{1,2,3\}$ is the RGB channel, $m_i$ is a predefined mode (peak location) for channel $i$, and $\lambda_k$ is the wavelength of the $k$-th band. To integrate the constraints (3.10) into the loss function of the first discriminator $D(Z',Z_1)$, the constraints can be written as a violation score.
$\zeta_d=\sum_{i=1,2,3}\sum_{k=1,\dots,33}\operatorname{sgn}(k-m_i)\{R_i(\lambda_{k+1})-R_i(\lambda_k)\}-R_i(\lambda_k)$,  (3.12)
where $\operatorname{sgn}(s)=1$ if $s>0$ and $-1$ if $s\le 0$.
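The score (3.12) can be transcribed almost directly. The sketch below assumes 0-based band indices and runs the forward difference over the first 32 bands (a reading choice, since $\lambda_{34}$ does not exist); each satisfied constraint then contributes a non-positive term, so lower scores are better:

```python
import numpy as np

def violation_score(R, modes):
    """Unimodality/positivity penalty zeta_d of Eq (3.12).

    R     : (33, 3) candidate SRF, one column per RGB channel.
    modes : three peak locations m_i (0-based band indices).
    Monotone-up-then-down, non-negative channels give negative terms;
    any violation pushes the score upward.
    """
    K = R.shape[0]
    score = 0.0
    for i, m in enumerate(modes):
        diff = np.diff(R[:, i])                      # R_i(k+1) - R_i(k)
        sgn = np.where(np.arange(K - 1) >= m, 1.0, -1.0)
        score += np.sum(sgn * diff) - np.sum(R[:, i])
    return score

# A positive, unimodal SRF scores strictly lower than its negation,
# which violates both constraints everywhere.
lam = np.arange(33)
good = np.stack([np.exp(-((lam - m) / 5.0) ** 2) for m in (8, 16, 24)],
                axis=1)
```

Weighted by $\eta_2$ in (3.13), this score steers the generator toward bell-shaped, non-negative SRFs without a hard projection step.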
The synthesized SRFs $Z'$ will be rejected by $D_{Z'Z_1}$ if they are dissimilar to the normal SRFs $Z_1=R^1_{33}$. The discriminator $D_{Z'Z_1}$ is formed directly by a residual neural network (ResNet34). The network adds jump connections that skip internal layers to avoid the vanishing-gradient and accuracy-saturation problems. The construction of ResNet repeats a fixed pattern several times, in which the strided-convolution downsampler jumps ahead, bypassing every two convolutions.
$G_{Z'Z}$ is the forward model, which generates the re-estimated image pairs $Z''=(C''_3,H''_{33})$. The $C''_3$ is obtained directly by applying the estimated SRF $R'$ to the high-dimensional $H'_{33}$: the re-estimated RGB images are computed as $C''_3=H'_{33}R'_{33}$, where $H'_{33}$ is one of the outputs of $G_{ZZ'}$. The re-estimated HSI images $H''_{33}$ are the output of $G_{Z'Z}$, taking input from $C'_3$, another output of $G_{ZZ'}$.
Finally, a ResNet34 $D_{ZZ''}$, which takes the original images $Z$ and the re-estimated images $Z''$, rejects those having large errors. The discriminator $D_{Z'Z_1}$ uses a probability-based loss function, and $D_{ZZ''}$ uses a simple mean squared error (mse) loss between the expected and predicted outputs.
The loss functions for the two discriminators are expressed in scores
$D(Z',Z_1)=(p_d-1)^2+p_d^2+\eta_1\dfrac{(R'-R_1)^2}{R'}+\eta_2\zeta_d$,  (3.13)
$D(Z,Z'')=\dfrac{(C-C'')^2}{C}+\dfrac{(H-H'')^2}{H}$,  (3.14)
where $p_d$ is the probability output of the discriminator $(Z',Z_1)$, and $\eta_1$ and $\eta_2$ are scaling factors controlling the involvement of the similarity and constraint terms, respectively. When the cycleGAN converges, the error scores (3.13) and (3.14), between $Z'$ and $Z_1$ and between $Z$ and $Z''$ respectively, are minimized (the convergence trend of the scores is shown in Figure 9).
To increase the accuracy of the back projection, we use a kernel method to decompose the HSI and tristimulus RGB images. The estimated projection over the kernels can further minimize the reconstruction errors [33]. We decompose the HSI in the DRM (3.2) into a set of kernels $\psi(\lambda,\lambda')$, $\lambda,\lambda'=1,\dots,N$,
$h(x,j,\lambda)=\sum_{\lambda'}\tilde h(x,j,\lambda')\psi(\lambda,\lambda'),\ \forall x\in[1,n],\ j\in[1,m]$, or $H^{(j)}_{n\times N}=\tilde H^{(j)}_{n\times N}\Psi_{N\times N}$.  (3.15)
At the training stage, images are indexed by $j\in[1,m]$. We aim to find a set of kernels $\tilde H^{(j)}=\{\tilde h(x,j,\lambda)\}_{x\in[1,n],\lambda\in[1,N]}$ that can span the tristimulus space.
$\tilde H$ can be estimated by the Yule-Walker equations [34] or a Nadaraya-Watson kernel estimator [35]. The kernels $\Psi$ are chosen in a reproducing kernel Hilbert space, guaranteeing that the basis vectors exist [33].
By the representation theorem, the tristimulus values can be approximated by the transformed kernels $\sum_\lambda\tilde h(x,j,\lambda)R(\lambda,i)$ and the coefficients $\tilde w(x,x')_{x,x'\in[1,nm]}=\tilde W$ such that
$C(x,j,i)\approx\sum_\lambda\tilde w(x,x)\,\tilde h(x,j,\lambda)R(\lambda,i),\quad\text{for }x\in[1,n],\ j\in[1,m],\ i\in[1,3]$.  (3.16)
We decompose an RGB image by approximating
$C^{(j)}_{n\times 3}\approx\tilde W^{(j)}_{n\times N}\Psi_{N\times N}R_{N\times 3}$.  (3.17)
Together with the DRM (3.2), this implies that
$H^{(j)}_{n\times N}\approx\tilde W^{(j)}_{n\times N}\Psi_{N\times N}$.  (3.18)
Because the back projection $\tilde W$ may not align with the direction of the covariance of $\Psi^{(j)}R$, we need additional assumptions in the $l_1$ space. Assuming the tristimulus space is far smaller than that of the HSI space, sparse partial least squares regression partitions the space into two disjoint subspaces $H=(H_1,H_2)$, spanned by relevant ($H_1$) and irrelevant ($H_2$) variables [36,37]. Such a partition effectively isolates uncorrelated bases in the latent space.
We aim to find a set of coefficients that maximizes the span of the hyperspectral space while avoiding over-fitting. The goal is
$\max_\mu\min_{\tilde W}\ \lVert C-\tilde W\Psi R^\phi\rVert_2+\mu\left\{\frac{1-\alpha}{2}\lVert\tilde W\rVert_2^2+\alpha\lVert\tilde W\rVert_1\right\}$.  (3.19)
The challenge of the ill-posed problem is to invert a near-singular problem. The regularization methods in (3.19) with a positive perturbation in l1 can suppress over-fitting effectively.
To avoid over-fitting, we further generalize the problem into the elastic net (3.19) with a Least Absolute Shrinkage and Selection Operator (Lasso) term [38], where $\alpha$ is the Lasso penalty and $\lVert\cdot\rVert_1$, $\lVert\cdot\rVert_2$ are the $l_1$ and $l_2$ norms.
Optimization (3.19) pushes the coefficients to zero if the covariates are insignificant due to the l1 properties. The reconstruction efficiency increases when transformation manifests the sparsity properties [39]. The optimization (3.19) is regulated by the Lasso penalty (α=1) or the ridge penalty (α=0), and it takes advantage of the sparse l1 norm in evaluating solutions for the ill-posed problem [40].
If α=0, the optimization in (3.19) reduces to an ordinary generalized matrix inverse, which serves as a comparison basis. The least-square estimation in the objective creates a significant variance when covariates exhibit multicollinearity.
Ridge regression performs optimization to compensate for the multicollinearity problem by finding a balance between variance and bias [41]. The ridge penalty effectively reduces the variance of the identified coefficients [42,43]. The experiments demonstrate that the lasso and ridge estimation effectively reconstruct the HSI without over-fit.
Due to the properties of the $l_1$ space, an optimization (e.g., $\min_x\lVert x\rVert_1$ s.t. (3.19)) should possess minimal non-zero solutions and yield strong reconstruction performance if sparsity properties such as restricted isometry and incoherence are satisfied [39,40,44]. In our multivariate transformation matrix $\Psi$, the Lasso penalty tends to reduce the coefficients of less important covariates to zero, thus generating more zero solutions and fitting our assumption about the hyperspectral space.
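For the ridge end of (3.19) ($\alpha=0$), the row-wise solution has a closed form. A numpy sketch with hypothetical dimensions, writing $A=\Psi R^\phi$ for the design matrix (the Lasso case needs an iterative solver and is omitted here):

```python
import numpy as np

def ridge_backprojection(C, A, mu):
    """Closed-form ridge (alpha = 0) special case of the elastic net (3.19).

    Solves min_W ||C - W A||^2 + mu ||W||_2^2 row-wise, where
    A = Psi @ R_phi is the (m, 3) transformed-kernel design matrix
    and C is the (n, 3) RGB data. Returns the (n, m) coefficients W.
    """
    m = A.shape[0]
    # Normal equations: W = C A^T (A A^T + mu I)^{-1}
    return C @ A.T @ np.linalg.inv(A @ A.T + mu * np.eye(m))

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3))        # m = 10 kernels, 3 channels
C = rng.standard_normal((5, 3))         # n = 5 pixels
W_small = ridge_backprojection(C, A, 1e-8)
W_large = ridge_backprojection(C, A, 10.0)
```

Increasing $\mu$ shrinks $\lVert\tilde W\rVert_2$, trading reconstruction error for stability; this is exactly how (3.19) suppresses over-fitting in the near-singular inverse.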
The ill-posed back projection from C to H still contains errors. We propose a machine learning model to further close the gap. In the training step, we aim to solve a model g such that the error
$\lVert H_j-g(\tilde W_j)\Psi\rVert_2$  (3.20)
is minimized. This study employs an ensemble of regression learners, including random forest regression and support vector regression.
At the query stage, we are ready to retrieve the HSI from the decomposition matrix $\tilde W$ of the RGB images $C_q$:
$H_q=g(\tilde W\Psi)$.  (3.21)
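The paper's $g$ ensembles random-forest and support-vector regressors. As a self-contained stand-in that still illustrates the training objective (3.20) and the query step (3.21), here is a minimal bagged linear ensemble (the kernel factor $\Psi$ is folded into the learned map for brevity; this is an illustrative substitute, not the paper's learners):

```python
import numpy as np

def fit_ensemble(W, H, n_models=10, seed=0):
    """Bagged least-squares stand-in for the ensemble regressor g.

    Each member is a linear map fit on a bootstrap resample of the
    (W, H) training pairs; g averages the members' predictions.
    W : (n, m) decomposition coefficients, H : (n, N) target HSI rows.
    """
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    members = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)               # bootstrap sample
        G, *_ = np.linalg.lstsq(W[idx], H[idx], rcond=None)
        members.append(G)
    return lambda W_query: np.mean([W_query @ G for G in members], axis=0)

# Sanity check on exactly linear synthetic data: the ensemble
# recovers the generating map.
rng = np.random.default_rng(2)
W = rng.standard_normal((200, 10))
G_true = rng.standard_normal((10, 33))
H = W @ G_true
g = fit_ensemble(W, H)
H_query = g(W)
```

Averaging bootstrap-trained members is the same variance-reduction idea as the random forest the text names; swapping in stronger nonlinear members changes only the per-member fit.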
Our method is robust to a wide range of SRFs. An experiment was designed to validate the algorithm: the HSIs reconstructed by our method are almost identical whether the measured or the estimated SRF is used. Regardless of the SRF quality, the reconstructed HSIs are consistently of high quality.
We first apply our algorithm in the standard CIE 1964 camera spectral response function (Figure 5 (a)). Our algorithm uses 33 bands from 400 nm to 720 nm. Based on the normal visible spectrum range, the 3-color images have been transformed into 33-color images. The reconstructed hyperspectral images match the original ground truth HSI closely (Figure 5 (b)).
To evaluate the effectiveness of our method, we designed experiments comparing standard laboratory camera calibration with ambient light conditions. We bought a low-cost camera (under US$30) from the Internet and set it up for laboratory calibration; the environment mimics the one consumer users have. A Macbeth color checker with 24 patches, shown in Figure 6 (c), was used. The spectrum of light from these patches was measured by a spectrometer (PhotoResearch PR-670). The first two subgraphs (a) and (b) in Figure 6 show the light-source and reflectance spectra of the patches and of the blue patch of the color checker. As shown in subgraph (d) of Figure 6, we fit the coefficients of the standard basis to estimate the actual SRF. With the measured SRF, we reconstruct a high-fidelity HSI in Figure 7. The error maps show small errors between the ground truth and reconstructed images. The quantitative errors are given in Table 1.
method | SRF | rmse | rrmse
ours | CIE | 0.029766 | 0.069987
ours | Measured | 0.037874 | 0.098941
ours | Generated | 0.038611 | 0.12115
[1] | CIE | 0.052838 | 0.1111
[1] | Measured | 0.04472 | 0.15981
[1] | Generated | 0.08846 | 0.16791
rmse = root mean squared error; rrmse = relative root mean squared error.
To achieve the goal of being calibration-free, autonomous generation of the SRF must be implemented. Without the help of any additional hardware, we iteratively generated an SRF with the cycleGAN (Figure 8). As the sampled iterations in Figure 9 show, the cycleGAN converges promptly. Despite slight non-smoothness in the spectral response across the R-G-B bands, the reconstruction accuracy remains high. The error maps in Figure 10 exhibit only tiny errors.
As shown in Table 1, the SRFs generated by the cycleGAN are accurate. The rmse (root mean squared error) and rrmse (relative root mean squared error) were 0.038 and 0.12, respectively. The convergence is relatively straightforward because the dimension of the latent space equals the number of HSI bands. The estimated SRFs are effective, judging by the low rmse obtained both with our kernel projection method and with the dictionary learning method in [1].
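For reference, the two error metrics of Table 1 can be computed as follows. The exact normalization behind rrmse is not stated in the text; dividing element-wise by the ground truth is one common convention and is assumed here:

```python
import numpy as np

def rmse(H_true, H_pred):
    """Root mean squared error over all pixels and bands."""
    return float(np.sqrt(np.mean((H_true - H_pred) ** 2)))

def rrmse(H_true, H_pred, eps=1e-12):
    """Relative RMSE: errors scaled element-wise by the ground truth
    (an assumed convention; Table 1 does not define it explicitly)."""
    return float(np.sqrt(np.mean(((H_true - H_pred) / (H_true + eps)) ** 2)))
```

With `H_true` and `H_pred` as (pixels, bands) arrays, these reproduce the kind of scalar scores reported per method/SRF row in Table 1.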
To further visualize the performance of our reconstruction and generation algorithms, we show the original and the synthesized picture from the recovered HSI side-by-side in Figure 11. It is evident that the two pictures are almost identical, which implies that the reconstructed HSI is sufficiently accurate.
The proposed method is superior to existing methods because it does not require laboratory measurement of SRF for a new camera. The experiments demonstrate that our automatic estimated SRFs are almost identical to the laboratory measurement.
Our generated SRFs are accurate and effective. The reconstructed HSIs with generated SRFs have low rmse, both by our kernel projection method and the existing method. We compared our result to an existing method [1] (Figure 12 and the second row of Table 1).
Our learning process, however, has errors. The HSIs reconstructed from a 30-dollar webcam will not be as accurate as those from a 40-thousand-dollar camera. In Figure 10, the RGB comparison shows an mse error as large as 3%. The maximal errors occur where the images contain specular highlights; the highlighted parts are sensitive to unknown artifacts and confuse the model during reconstruction.
Fortunately, the target application scenarios mainly need low-cost solutions and are less demanding about accuracy. Moreover, noise sources such as modeling errors and light-source disturbances also influence the accuracy of SRF identification and HSI reconstruction.
This research contributes to deep reverse analytics by integrating an automatic calibration procedure. This paper offers two contributions targeting HSI reconstruction. The estimated SRFs and CIFs match the results measured by the standard laboratory method, and the estimated HSIs achieve less than 3% errors in rmse. Therefore, our method possesses apparent advantages compared to other methods. Experimental results in real examples demonstrated the effectiveness of our method.
Limitations and future research
Our proposed algorithms work under several assumptions. We assume noise has no significant effect on the reconstruction process. In the future, we can use noise representation in the latent space. We will consider the spatial interaction effects resulting from the optical system in the future. In this study, we assume the exposure for each photo-transistor is independent. Therefore, we convert a picture to a 1D array in the model learning process to save computation time. In the future, we will use 2D array modeling.
The authors declare they have not used artificial intelligence (AI) tools in the creation of this article.
This work was supported in part by the National Science and Technology Council, Taiwan, under Grant Numbers MOST 110-2622-E-992-026, MOST 111-2221-E-037-007, NSTC 112-2218-E-992-004, and NSTC 112-2221-E-153-004. The authors also thank Mr. Huang, Yuan's General Hospital (ST110006), and NSYSU-KMU Joint Research Project (#NSYSU-KMU-112-P10).
The authors have no competing interests to declare.
[1] R. Hering, Oscillations in Lotka-Volterra systems of chemical reactions, J. Math. Chem., 5 (1990), 197–202. https://doi.org/10.1007/BF01166429
[2] G. Laval, R. Pellat, M. Perulli, Study of the disintegration of Langmuir waves, Plasma Physics, 11 (1969), 579–588. https://doi.org/10.1088/0032-1028/11/7/003
[3] F. Busse, Transition to turbulence via the statistical limit cycle route, in H. Haken (ed.), Chaos and Order in Nature, Springer Series in Synergetics, Berlin: Springer, 1981. https://doi.org/10.1007/978-3-642-68304-6_4
[4] S. Solomon, P. Richmond, Stable power laws in variable economies; Lotka-Volterra implies Pareto-Zipf, Eur. Phys. J. B, 27 (2002), 257–261. https://doi.org/10.1140/epjb/e20020152
[5] M. Carfora, I. Torcicollo, Cross-diffusion-driven instability in a predator-prey system with fear and group defense, Mathematics, 8 (2020), 1244. https://doi.org/10.3390/math8081244
[6] J. Chen, X. He, F. Chen, The influence of fear effect to a discrete-time predator-prey system with predator has other food resource, Mathematics, 9 (2021), 865. https://doi.org/10.3390/math9080865
[7] H. Chen, C. Zhang, Dynamic analysis of a Leslie-Gower-type predator-prey system with the fear effect and ratio-dependent Holling III functional response, Nonlinear Anal.-Model. Control, 27 (2022), 904–926. https://doi.org/10.15388/namc.2022.27.27932
[8] W. Allee, Animal Aggregations: A Study in General Sociology, Chicago: University of Chicago Press, 1931. https://doi.org/10.5962/bhl.title.7313
[9] D. Johnson, A. Liebhold, P. Tobin, O. Bjørnstad, Allee effect and pulsed invasion of the gypsy moth, Nature, 444 (2006), 361–363. https://doi.org/10.1038/nature05242
[10] E. Angulo, G. Roemer, L. Berec, J. Gascoigne, F. Courchamp, Double Allee effects and extinction in the island fox, Conserv. Biol., 21 (2007), 1082–1091. https://doi.org/10.1111/j.1523-1739.2007.00721.x
[11] H. Davis, C. Taylor, J. Lambrinos, D. Strong, Pollen limitation causes an Allee effect in a wind-pollinated invasive grass (Spartina alterniflora), PNAS, 101 (2004), 13804–13807. https://doi.org/10.1073/pnas.0405230101
[12] C. Taylor, A. Hastings, Finding optimal control strategies for invasive species: a density-structured model for Spartina alterniflora, J. Appl. Ecol., 41 (2004), 1049–1057. https://doi.org/10.1111/j.0021-8901.2004.00979.x
[13] C. Celik, O. Duman, Allee effect in a discrete-time predator-prey system, Chaos Soliton. Fract., 40 (2009), 1956–1962. https://doi.org/10.1016/j.chaos.2007.09.077
[14] H. Merdan, O. Duman, On the stability analysis of a general discrete-time population model involving predation and Allee effects, Chaos Soliton. Fract., 40 (2009), 1169–1175. https://doi.org/10.1016/j.chaos.2007.08.081
[15] O. Duman, H. Merdan, Stability analysis of continuous population model involving predation and Allee effect, Chaos Soliton. Fract., 41 (2009), 1218–1222. https://doi.org/10.1016/j.chaos.2008.05.008
[16] S. Zhou, Y. Liu, G. Wang, The stability of predator-prey systems subject to the Allee effects, Theor. Popul. Biol., 67 (2005), 23–31. https://doi.org/10.1016/j.tpb.2004.06.007
[17] H. Merdan, Stability analysis of a Lotka-Volterra type predator-prey system involving Allee effects, ANZIAM J., 52 (2011), 139–145. https://doi.org/10.21914/anziamj.v52i0.3418
[18] X. Guan, Y. Liu, X. Xie, Stability analysis of a Lotka-Volterra type predator-prey system with Allee effect on the predator species, Commun. Math. Biol. Neurosci., 2018 (2018), Article ID 9. https://doi.org/10.28919/cmbn/3654
[19] F. Chen, X. Guan, X. Huang, H. Deng, Dynamic behaviors of a Lotka-Volterra type predator-prey system with Allee effect on the predator species and density dependent birth rate on the prey species, Open Math., 17 (2019), 1186–1202. https://doi.org/10.1515/math-2019-0082
[20] J. Wang, J. Shi, J. Wei, Predator-prey system with strong Allee effect in prey, J. Math. Biol., 62 (2011), 291–331. https://doi.org/10.1007/s00285-010-0332-1
[21] E. González-Olivares, J. Mena-Lorca, A. Rojas-Palma, Dynamical complexities in the Leslie-Gower predator-prey model as consequences of the Allee effect on prey, Appl. Math. Model., 35 (2011), 366–381. https://doi.org/10.1016/j.apm.2010.07.001
[22] B. Dennis, Allee effects: population growth, critical density, and the chance of extinction, Nat. Resour. Model., 3 (1989), 481–538. https://doi.org/10.1111/j.1939-7445.1989.tb00119.x
[23] C. Zhang, W. Yang, Dynamic behaviors of a predator-prey model with weak additive Allee effect on prey, Nonlinear Anal.-Real., 55 (2020), 103137. https://doi.org/10.1016/j.nonrwa.2020.103137
[24] C. Ke, M. Yi, Y. Guo, Qualitative analysis of a spatiotemporal prey-predator model with additive Allee effect and fear effect, Complexity, 2022 (2022), Article ID 5715922. https://doi.org/10.1155/2022/5715922
[25] L. Chen, T. Liu, F. Chen, Stability and bifurcation in a two-patch model with additive Allee effect, AIMS Math., 7 (2022), 536–551. https://doi.org/10.3934/math.2022034
[26] X. He, Z. Zhu, J. Chen, F. Chen, Dynamical analysis of a Lotka-Volterra commensalism model with additive Allee effect, Open Math., 20 (2022), 646–665. https://doi.org/10.1515/math-2022-0055
[27] M. Hamilton, O. Burger, J. DeLong, J. Brown, Population stability, cooperation, and the invasibility of the human species, PNAS, 106 (2009), 12255–12260. https://doi.org/10.1073/pnas.0905708106
[28] J. Jacobs, Cooperation, optimal density and low density thresholds: yet another modification of the logistic model, Oecologia, 64 (1984), 389–395. https://doi.org/10.1007/BF00379138
[29] R. Lande, S. Engen, B. Saether, Stochastic Population Dynamics in Ecology and Conservation, Oxford: Oxford University Press, 2003. https://doi.org/10.1093/acprof:oso/9780198525257.001.0001
[30] Y. Zhang, Y. Fan, M. Liu, Analysis of a stochastic single-species model with intraspecific cooperation, Methodol. Comput. Appl., 24 (2022), 3101–3120. https://doi.org/10.1007/s11009-022-09957-y