
Hyperspectral images (HSI) can be derived from ordinary RGB cameras [1,2,3,4,5,6]. However, existing methods require detailed optical properties of the underlying camera, which can only be measured in a laboratory with expensive equipment, preventing ordinary users from using a plug-and-play webcam immediately.
This study proposes a calibration-free and illumination-independent method, as shown in Figure 1, to facilitate ordinary users' instant use of an arbitrary camera. Our algorithm can transform an ordinary webcam into an expensive HSI camera without the help of additional hardware. Mathematically, the forward transformation, mapping from high-dimensional HSI images to low-dimensional RGB images, is relatively easy, compared to the reverse one, mapping from low- to high-dimensional.
However, both the forward and reverse transformations involve a device-dependent response function for the underlying camera. In previous methods, the camera and light source must remain the same, and the spectral response function (SRF) of a camera and the current illuminating function (CIF) of the ambiance must be identified separately by specific equipment, such as standardized light sources, color-checkers, and spectrometers. The images used in the training database must match the same camera and light source. Such limitations prevent the algorithm from being plug-and-play.
Hyperspectral technology has been widely used in different fields [7]. Although hyperspectral imaging possesses tremendous advantages across a wide range of applications, the extremely high acquisition cost (US$40,000 or above) limits the suitable applications and usage scenarios. Existing computational methods converting RGB to HSI can significantly broaden the applicability of HSI.
In this study, a semi-finished model, independent of devices and illumination, is shipped with the installation package, as shown in the Model part of Figure 1. Before use, a setup step is performed without additional hardware to extract the SRF of the underlying camera, as shown in step 1 in Figure 1. During each usage, the ambient CIF must be estimated for each taken image, as shown in step 2 in Figure 1. The final reconstruction with SRF and CIF then projects RGB images to the device and illumination-independent HSI.
To accumulate sparse projection kernels of hyperspectral signatures from many hyperspectral priors, we recover the camera SRF through a most-probable estimation. As shown in Figure 2, at the setup step, two parametric tasks are accomplished automatically without user involvement. The SRF and CIF are estimated by uploading random images taken by the target camera to the cloud, so no calibration step is required of ordinary users. We develop a triangular generative adversarial assimilation combining two different methods, one supervised and one unsupervised, each with its own advantages, pulling each other up to achieve a high-precision reconstruction.
Despite extensive studies on reconstructing HSI from RGB images, numerous challenges remain unsolved. This study seeks to close three research gaps. First, existing studies require a laboratory-calibrated camera, and the training is bound to that specific camera. Our results contribute to online retrieval of the SRF and to SRF-independent offline training.
Much research has contributed to reverse mapping from the low-dimensional RGB space to the high-dimensional hyperspectral space [1,4,6,8,9]. However, gaps still exist.
The essence of the forward problem is a spatial convolution between the incident image spectrum and the camera optics [10]. The properties of camera spectral response functions have been studied extensively [11]. Algebraic or iterative approximation methods suffice for the reverse mapping as long as the camera response function can be correctly acquired before applying the algorithms [4,6]. The reverse mapping in kernel representation is prevalent and stable [1,8,9]. Statistical inference methods have been reported to be effective in enhancing the accuracy of the reverse mapping [2,3].
With the assistance of additional hardware, the recovered image can be more accurate [5,12,13,14]. However, such assistance does not fit the goal of this study. Therefore, the last piece of the puzzle for the HSI reconstruction is still acquiring camera response functions without the help of expensive instruments.
The second research gap leads to a challenge in algorithm design. Because the central part of the forward problem involves the convolution with the camera SRF, the training algorithm can only retrieve the function mapping if the SRF is determined at the training phase. On the other hand, estimating the SRF is difficult if we do not have pairs of input RGB and output HSI images [15]. Our problem is to retrieve the HSI without training pairs and without a system model. The original generative adversarial network (GAN) requires a pair of images $(x,y)$ for discriminator $D$ and generator $G$ to evaluate
$$\min_G \max_D V(D,G)=\mathbb{E}_{x\sim p_{\text{data}}(x)}[\log D(x|y)]+\mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z|y)))].$$
Thanks to modern progress in maximum likelihood theory and sparse variations of the GAN, such a "double unknown retrieval" has become possible [16,17,18]. One advantage of cycleGAN over standard GANs is that the training data need not be paired. This implies that, in our situation, the images (RGB, HSI) alone are sufficient for estimating the output SRFs. The idea of conditional and cyclic adversarial networks with image-to-image translation has successfully solved the problem of double unknown retrieval [19].
When the training data are not completely paired, the latent space reconstruction becomes the key part of the double unknown retrieval [20,21,22]. Convolution particle filters have been proven effective in estimating unknown parameters in state-space [23].
The generative network synthesizes the SRFs based on a predictive distribution conditioned on the correctness of the recovery of the original RGB values [19,24]. When the projection to the latent space is not linear, some studies apply advanced algorithms to address the nonlinearity issue [25].
The third gap in the challenges of HSI reconstruction is the interference of versatile illumination conditions. Many studies have tried to decouple the influence of illumination. Despite its simplicity, an accurate spectral reflectance reconstruction must come with a spectral estimation of the illumination [13]. Some works solve the issue in the tristimulus value space (RGB or CIE XYZ) by collecting images under multiple illumination conditions [26,27,28]. Although CIE XYZ starts from the spectral response of illumination, it primarily describes the color appearance for the human eye, not the spectral reflectance of objects. Therefore, a research gap exists in decoupling the influence of unknown illumination for spectral response estimation algorithms.
The sparse assimilation algorithm can reconstruct the parameters from limited observations [29]. However, most iterative algorithms are computationally intensive and therefore unsuitable for online use, so a tractable algorithm is needed for computational feasibility. We use hierarchical Bayesian learning and Metropolis-Hastings algorithms [30,31] to estimate the joint probability densities. Therefore, this study exploits an assimilation method adapted to online computation.
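For illustration, the following is a minimal random-walk Metropolis-Hastings sketch in Python, assuming a user-supplied `log_post` function for the (unnormalized) joint posterior of the SRF/CIF coefficients; the function name, step size, and iteration count are illustrative placeholders rather than the settings used in the experiments.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.1, rng=None):
    """Minimal random-walk Metropolis-Hastings sampler (illustrative sketch).

    log_post : callable returning the unnormalized log posterior density,
               e.g., of the joint SRF/CIF coefficients.
    theta0   : initial parameter vector.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.shape)
        lp_new = log_post(proposal)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < lp_new - lp:
            theta, lp = proposal, lp_new
        samples.append(theta.copy())
    return np.array(samples)

# Example: sampling a toy 2-D Gaussian posterior.
draws = metropolis_hastings(lambda t: -0.5 * np.sum(t ** 2), np.zeros(2))
```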
As shown in Figure 3, a camera with spectral response function $R^{\phi}(\lambda,i)$ takes a spectral reflectance input $h(j,\lambda)$ for wavelength index $\lambda\in[1,N]$ and produces the tristimulus values $c(j,i)$ for $i\in\{R,G,B\}$ (or $[1,3]$), where $j\in[1,n]$ indexes the $j$-th dataset pair in the database. The pixels of the 24-patch color-checker have been arranged into a column vector for ease of dimension expression.
The forward problem is stated as a dichromatic reflection model (DRM)
$$h(\lambda)=E(\lambda)\rho(\lambda),\ \forall\lambda, \qquad c(j,i)=\sum_{\lambda}h(j,\lambda)R(\lambda,i),\ \forall j,i, \tag{3.1}$$
or, for the $j$-th image in the database,
$$C_{n\times 3}=H_{n\times N}R_{N\times 3}, \tag{3.2}$$
where the matrix form collects the indices such that $C_{n\times 3}=\{c(j,i)\}_{j\in[1,n],\,i\in[1,3]}$.
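To make the dimensions in (3.1)-(3.2) concrete, here is a minimal NumPy sketch of the forward model; the illumination, reflectance, and SRF values are random placeholders, not measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 24, 33                          # 24 color-checker patches, 33 spectral bands
E = rng.uniform(0.5, 1.0, N)           # illumination spectrum E(lambda), placeholder
rho = rng.uniform(0.0, 1.0, (n, N))    # surface reflectance rho(j, lambda), placeholder
H = rho * E                            # h(j, lambda) = E(lambda) * rho(j, lambda), Eq (3.1)
R = rng.uniform(0.0, 1.0, (N, 3))      # camera SRF R(lambda, i), placeholder
C = H @ R                              # c(j, i) = sum_lambda h(j, lambda) R(lambda, i), Eq (3.2)
print(C.shape)                         # (24, 3): one tristimulus triplet per patch
```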
By the representation theorem, we can find coefficients $\tilde{w}(k,k')_{k,k'\in[1,m]}=\tilde{W}$ for $m$ chosen basis functions such that the RGB images $C$ can be approximated by
$$C_{n\times 3}\approx\tilde{W}_{n\times m}\Psi_{m\times N}R^{\phi}_{N\times 3}. \tag{3.3}$$
The DRM (3.2) becomes
$$\tilde{W}\Psi R^{\phi}=HR^{\phi}, \tag{3.4}$$
and therefore
$$H=\tilde{W}\Psi. \tag{3.5}$$
As discussed in the previous section, many methods exist to estimate the ill-posed back projection, but challenges remain. Our problem is an inverse one: given the tristimulus values $C$, we want to reconstruct $H$. Because the dimension of $H$ is much higher than that of $C$, the inverse projection is ill-posed. Our method requires no pre-built calibration and prevents over-fitting after training.
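To see why the inverse projection is ill-posed, the short demonstration below applies the naive generalized (Moore-Penrose) inverse, the comparison baseline corresponding to $\alpha=0$ discussed later; the SRF and spectra are synthetic placeholders, and the example only illustrates that reproducing the RGB values does not recover the spectra.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 33
R = rng.uniform(0.0, 1.0, (N, 3))          # assumed known SRF (placeholder values)
H_true = rng.uniform(0.0, 1.0, (24, N))    # ground-truth spectra for 24 patches
C = H_true @ R                             # forward projection to RGB

# Naive back projection: the pseudo-inverse returns only the minimum-norm solution
# out of infinitely many H satisfying H @ R = C, hence a large spectral error.
H_naive = C @ np.linalg.pinv(R)
print(np.linalg.norm(H_naive @ R - C))     # ~0: the RGB values are reproduced exactly
print(np.linalg.norm(H_naive - H_true))    # large: the spectra are not recovered
```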
The CIF is estimated from the images of the white patch of the color checker. The illumination light $e$ is converted to electric signals through the camera response $e^{\phi}$, so that $e(\lambda)=\sum_i e^{\phi}(\lambda,i)$. Using the above-mentioned method, the CIF can be decomposed over a set of kernel functions:
$$e(\lambda)=\sum_i e^{\phi}(\lambda,i)=\sum_{i,k}\gamma_{i,k}\,b_k(\lambda,i),\quad\forall\lambda\in[1,N], \tag{3.6}$$
where $b_k(\lambda,i)$ is the $k$-th basis for the CIF and $\gamma_{i,k}$ is the corresponding coefficient.
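A minimal sketch of fitting the CIF coefficients in (3.6) by least squares is shown below; for simplicity it assumes a kernel basis shared across the three channels, and `estimate_cif_coeffs` and its arguments are hypothetical names, not part of a released implementation.

```python
import numpy as np

def estimate_cif_coeffs(e_phi, basis):
    """Least-squares fit of the CIF coefficients gamma in Eq (3.6).

    e_phi : (N, 3) camera response to the white patch, e_phi(lambda, i).
    basis : (N, K) kernel basis b_k(lambda) evaluated on the N bands
            (assumed shared across channels in this sketch).
    Returns gamma of shape (K, 3) and the reconstructed CIF e(lambda).
    """
    gamma, *_ = np.linalg.lstsq(basis, e_phi, rcond=None)
    e = (basis @ gamma).sum(axis=1)   # e(lambda) = sum_i sum_k gamma_{i,k} b_k(lambda)
    return gamma, e
```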
Classical generative adversarial network applications use neural networks for both classification and regression, and such modeling suffers from convergence problems. In our method, we use optical models as generators and neural networks as discriminators to maximize the use of prior model information about the stochastic process of Lambertian reflectance (Eq 3.2), implemented through a Monte Carlo radiative simulation. Through the collected priors, the discriminator can distinguish between successful and unsuccessful generations. Therefore, the cyclic iteration converges fast and accurately estimates the latent sparse space with sparse convergence metrics.
We hybridize a statistical generator and a neural-network discriminator to maximize the use of prior model information for the stochastic process. Our generator can also take environmental conditions as covariates in the random process and is robust to anti-symmetric station distributions.
A standard GAN comprises a generator $G$ and a discriminator $D$ [32]. The models $G$ and $D$ can be neural networks or any mathematical functions, as long as the optimization (3.7) with Lagrangian $\eta$ has solutions:
$$G^{*}=\arg\min_{G}\max_{D}\mathcal{L}_{GAN}(G,D)+\eta\,\mathcal{L}_{\ell_1}(G). \tag{3.7}$$
The standard loss functions have the form
$$\mathcal{L}_{GAN}(G,D)=\mathbb{E}_{Z,Z'}[\log D(Z,Z')]+\mathbb{E}_{Z,Z_1}[\log(1-D(Z,G(Z,Z_1)))], \tag{3.8}$$
$$\mathcal{L}_{\ell_1}(G)=\mathbb{E}_{Z,Z',Z_1}[\|Z'-G(Z,Z_1)\|_1]. \tag{3.9}$$
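As a plain NumPy illustration of (3.7)-(3.9), the sketch below evaluates the two loss terms from discriminator outputs and a generated latent sample; `gan_losses`, its arguments, and the default $\eta$ are illustrative placeholders.

```python
import numpy as np

def gan_losses(d_real, d_fake, z_target, z_generated, eta=10.0):
    """Evaluate L_GAN (3.8), L_l1 (3.9), and the combined objective of (3.7).

    d_real, d_fake : discriminator probabilities D(Z, Z') and D(Z, G(Z, Z1)).
    z_target, z_generated : target latent sample Z' and generator output G(Z, Z1).
    eta : Lagrangian weight on the l1 term (placeholder value).
    """
    eps = 1e-12                                        # numerical safety for log()
    l_gan = np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))
    l_l1 = np.mean(np.abs(z_target - z_generated))     # averaged l1 distance
    return l_gan, l_l1, l_gan + eta * l_l1             # the generator minimizes the total
```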
The GAN used in this study is a type of cycleGAN, which contains two generators and two discriminators, as shown in Figure 4. Our cycleGAN takes two sets of images as input and produces an output containing the corresponding SRF. The observation $Z$ contains the pair of $C_3=(R,G,B)$ and $H_{33}$ 33-band HSI images. The subscripts 3 and 33 denote the number of bands of the variables and are omitted when the meaning is clear. The latent estimation represents the SRF, $Z'=(R'_{33},H'_{33},C'_3)$. In the target domain, $Z'$ is compared with a small set of collected SRFs $Z_1=R^1_{33}$, which serve only as shape templates. The re-estimated image pairs are $Z''=(C''_3,H''_{33})$, where $C''_3=H'_{33}R'_{33}$, $H'_{33}$ is the coefficient output of $G_{ZZ'}$, and $H''_{33}$ is the direct output of $G_{Z'Z}$.
The models in the cycleGAN are $G_{ZZ'}$, $D_{Z'Z_1}$, $G_{Z'Z}$, and $D_{ZZ''}$, respectively. The cycleGAN takes $C_3$ and $H_{33}$ as the input dataset. We first prepare a set of (RGB, HSI) pairs for different color patches and a known camera (e.g., CIE 1964). Because we expect the same color patch to produce the same HSI through different cameras, we prepare additional image pairs by taking the image from the target unknown camera as the RGB and the HSI of the known camera as the HSI. We also need a small sample of normal SRFs $Z_1=R^1_{33}$ as target templates to prevent multiple solutions that are dissimilar to normal SRFs.
The generator $G_{ZZ'}$ is the reverse model, which takes $Z=(H,C)$ in the source domain and outputs $Z'=(R',H',C')$ in the target domain. During generation, $G_{ZZ'}$ synthesizes SRFs $Z'=R'$, which is equivalent to applying a perturbation $\Delta$ such that $R'=R\Delta$ without violating the modality (Eq 3.10) and positivity (Eq 3.11) constraints:
$$\begin{cases}R_i(\lambda_{k+1})>R_i(\lambda_k), & k=1,\ldots,m_i-1,\\ R_i(\lambda_{k+1})\le R_i(\lambda_k), & k=m_i,\ldots,33,\end{cases} \tag{3.10}$$
$$R_i(\lambda_k)\ge 0,\quad k=1,\ldots,33, \tag{3.11}$$
where $i\in\{1,2,3\}$ indexes the RGB channels, $m_i$ is a predefined mode (peak location) for channel $i$, and $\lambda_k$ is the wavelength of the $k$-th band. To integrate the constraints (3.10) into the loss function of the first discriminator $D(Z',Z_1)$, the constraints can be written in the form of a violation score:
$$\zeta_d=\sum_{i=1,2,3}\sum_{k=1,\ldots,33}\operatorname{sgn}(k-m_i)\{R_i(\lambda_{k+1})-R_i(\lambda_k)\}-R_i(\lambda_k), \tag{3.12}$$
where $\operatorname{sgn}(s)=1$ if $s>0$ and $-1$ if $s\le 0$.
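One possible reading of the constraints (3.10)-(3.11) and the score (3.12) is sketched below, where movement in the wrong monotonic direction around the mode and negative responses both increase the score; interpreting the trailing $-R_i(\lambda_k)$ term of (3.12) as a positivity penalty is an assumption.

```python
import numpy as np

def violation_score(R, modes):
    """Unimodality/positivity score zeta_d, a hedged reading of Eq (3.12).

    R     : (33, 3) synthesized SRF, one column per RGB channel.
    modes : predefined peak locations m_i for the three channels.
    """
    score = 0.0
    for i in range(3):
        diffs = np.diff(R[:, i])                    # R_i(lambda_{k+1}) - R_i(lambda_k)
        k = np.arange(1, len(diffs) + 1)            # band index of each difference
        sgn = np.where(k > modes[i], 1.0, -1.0)     # sgn(k - m_i), with sgn(0) = -1
        score += np.sum(sgn * diffs)                # penalizes the wrong monotonic direction
        score += np.sum(np.maximum(-R[:, i], 0.0))  # assumed positivity penalty
    return score
```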
The synthesized SRFs $Z'$ will be rejected by $D_{Z'Z_1}$ if they are dissimilar to the normal SRFs $Z_1=R^1_{33}$. The discriminator $D_{Z'Z_1}$ is built directly on a residual neural network (ResNet34). The network adds skip connections that bypass internal layers to avoid the vanishing gradient and accuracy saturation problems. The ResNet construction repeats a fixed pattern several times, in which a strided-convolution downsampler skips over every two convolutions.
$G_{Z'Z}$ is the forward model, which generates the re-estimated image pairs $Z''=(C''_3,H''_{33})$. The $C''_3$ is obtained by applying the estimated SRF $R'$ directly to the high-dimensional $H'_{33}$; that is, the re-estimated RGB images are computed as $C''_3=H'_{33}R'_{33}$, where $H'_{33}$ is one of the outputs of $G_{ZZ'}$. The re-estimated HSI images $H''_{33}$ are the output of $G_{Z'Z}$ taking as input $C'_3$, another output of $G_{ZZ'}$.
Finally, a ResNet34 $D_{ZZ''}$, which takes the original images $Z$ and the re-estimated images $Z''$, rejects those with large errors. The discriminator $D_{Z'Z_1}$ uses a probability-based loss function, while $D_{ZZ''}$ uses a simple loss based on the mean squared error (MSE) between the expected and predicted outputs.
The loss functions for the two discriminators are expressed as scores
$$D(Z',Z_1)=(p_d-1)^2+p_d^2+\eta_1\frac{(R'-R_1)^2}{R'}+\eta_2\zeta_d, \tag{3.13}$$
$$D(Z,Z'')=\frac{(C-C'')^2}{C}+\frac{(H-H'')^2}{H}, \tag{3.14}$$
where $p_d$ is the probability output of the discriminator $D(Z',Z_1)$, and $\eta_1$ and $\eta_2$ are scaling factors controlling the involvement of the similarity term and the constraints, respectively. When the cycleGAN converges, the error scores (3.13) and (3.14) between $Z'$ and $Z_1$, and between $Z$ and $Z''$, respectively, are minimized (the convergence trend of the scores is shown in Figure 9).
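The scores (3.13)-(3.14) can be evaluated as below; reading the squared differences as relative errors (division by $R'$, $C$, and $H$) is an interpretation of the flattened formulas, and the small epsilon guards are added only for numerical safety.

```python
import numpy as np

def d_score_srf(p_d, R_prime, R1, zeta_d, eta1=1.0, eta2=1.0, eps=1e-8):
    """Score of the first discriminator D(Z', Z1), Eq (3.13); eta1, eta2 are placeholders."""
    similarity = np.sum((R_prime - R1) ** 2 / np.maximum(np.abs(R_prime), eps))
    return (p_d - 1.0) ** 2 + p_d ** 2 + eta1 * similarity + eta2 * zeta_d

def d_score_cycle(C, C2, H, H2, eps=1e-8):
    """Score of the second discriminator D(Z, Z''), Eq (3.14), as relative squared errors."""
    return (np.sum((C - C2) ** 2 / np.maximum(np.abs(C), eps))
            + np.sum((H - H2) ** 2 / np.maximum(np.abs(H), eps)))
```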
To increase the accuracy of the back projection, we use a kernel method to decompose the HSI and the tristimulus RGB images. The estimated projection over the kernels can further minimize the reconstruction errors [33]. We decompose the HSI in the DRM (3.2) over a set of kernels $\psi(\lambda,\lambda')_{\lambda=1,\ldots,N,\ \lambda'=1,\ldots,N}$:
$$h(x,j,\lambda)=\sum_{\lambda'}\tilde{h}(x,j,\lambda')\,\psi(\lambda,\lambda'),\quad\forall x\in[1,n],\ j\in[1,m],\quad\text{or}\quad H^{(j)}_{n\times N}=\tilde{H}^{(j)}_{n\times N}\Psi_{N\times N}. \tag{3.15}$$
At the training stage, images are indexed by $j\in[1,m]$. We aim to find a set of kernel coefficients $\tilde{H}^{(j)}=\{\tilde{h}(x,j,\lambda)\}_{x\in[1,n],\lambda\in[1,N]}$ that can span the tristimulus space. $\tilde{H}$ can be estimated by the Yule-Walker equations [34] or a Nadaraya-Watson kernel estimator [35]. The kernels $\Psi$ are chosen in a reproducing kernel Hilbert space, guaranteeing that the basis vectors exist [33].
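A minimal sketch of the decomposition (3.15) is given below, assuming Gaussian kernels as one admissible RKHS choice and a plain least-squares fit in place of the Yule-Walker or Nadaraya-Watson estimators; the bandwidth value is a placeholder.

```python
import numpy as np

N = 33
lam = np.arange(N, dtype=float)
sigma = 3.0                            # kernel bandwidth (placeholder)
# Gaussian kernels psi(lambda, lambda'): one admissible RKHS choice.
Psi = np.exp(-(lam[:, None] - lam[None, :]) ** 2 / (2.0 * sigma ** 2))   # (N, N)

def decompose_hsi(H):
    """Fit the coefficients H_tilde of Eq (3.15): H ~= H_tilde @ Psi."""
    # Solve Psi^T X = H^T in the least-squares sense, so that X^T @ Psi ~= H.
    H_tilde = np.linalg.lstsq(Psi.T, H.T, rcond=None)[0].T
    return H_tilde
```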
By the representation theorem, the tristimulus values can be approximated by the transformed kernels $\sum_{\lambda}\tilde{h}(x,j,\lambda)R(\lambda,i)$ and coefficients $\tilde{w}(x,x')_{x,x'\in[1,nm]}=\tilde{W}$ such that
$$C(x,j,i)\approx\sum_{\lambda}\tilde{w}(x,x)\,\tilde{h}(x,j,\lambda)\,R(\lambda,i),\quad\text{for }x\in[1,n],\ j\in[1,m],\ i\in[1,3]. \tag{3.16}$$
We decompose an RGB image by approximating
$$C^{(j)}_{n\times 3}\approx\tilde{W}^{(j)}_{n\times N}\Psi_{N\times N}R_{N\times 3}. \tag{3.17}$$
Together with the DRM (3.2), this implies that
$$H^{(j)}_{n\times N}\approx\tilde{W}^{(j)}_{n\times N}\Psi_{N\times N}. \tag{3.18}$$
Because the back projection $\tilde{W}$ may not align with the direction of the covariance of $\Psi^{(j)}R$, we need additional assumptions in the $\ell_1$ space. Assuming the tristimulus space is far smaller than that of the HSI space, sparse partial least squares regression partitions the space into two disjoint subspaces $H=(H_1,H_2)$ spanned by relevant ($H_1$) and irrelevant ($H_2$) variables [36,37]. Such a partition effectively isolates uncorrelated basis vectors in the latent space.
We aim to find a set of coefficients that maximizes the span of the hyperspectral space while avoiding over-fitting. The goal is
$$\max_{\mu}\min_{\tilde{W}}\;\|C-\tilde{W}\Psi R^{\phi}\|_2+\mu\left\{\frac{1-\alpha}{2}\|\tilde{W}\|_2^2+\alpha\|\tilde{W}\|_1\right\}. \tag{3.19}$$
The challenge of the ill-posed problem is inverting a near-singular system. The regularization in (3.19), with a positive perturbation in $\ell_1$, suppresses over-fitting effectively. To avoid over-fitting, we further generalize the problem with an elastic net in (3.19), combining a least absolute shrinkage and selection operator (Lasso) term [38] with penalty weight $\alpha$ and the $\ell_1$ and $\ell_2$ norms $\|\cdot\|_1$, $\|\cdot\|_2$.
Optimization (3.19) pushes the coefficients to zero if the covariates are insignificant due to the l1 properties. The reconstruction efficiency increases when transformation manifests the sparsity properties [39]. The optimization (3.19) is regulated by the Lasso penalty (α=1) or the ridge penalty (α=0), and it takes advantage of the sparse l1 norm in evaluating solutions for the ill-posed problem [40].
If α=0, the optimization in (3.19) reduces to an ordinary generalized matrix inverse, which serves as a comparison basis. The least-square estimation in the objective creates a significant variance when covariates exhibit multicollinearity.
Ridge regression performs optimization to compensate for the multicollinearity problem by finding a balance between variance and bias [41]. The ridge penalty effectively reduces the variance of the identified coefficients [42,43]. The experiments demonstrate that the lasso and ridge estimation effectively reconstruct the HSI without over-fit.
Due to the properties of the $\ell_1$ space, an optimization such as $\min_x\|x\|_1$ subject to (3.19) should possess a solution with minimal non-zero entries and yield strong reconstruction performance if several sparsity properties, such as the restricted isometry and incoherence properties, are satisfied [39,40,44]. In our multivariate transformation matrix $\Psi_i$, the Lasso penalty tends to shrink the coefficients of less important covariates to zero, generating more zero entries and fitting our assumption about the hyperspectral space.
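A minimal sketch of the elastic-net back projection (3.19) using scikit-learn is shown below; the mapping of $\mu$ to `alpha` and of $\alpha$ to `l1_ratio` follows scikit-learn's parameterization, and `fit_sparse_backprojection` together with its default weights are hypothetical names and values, not the released implementation.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def fit_sparse_backprojection(C, Psi, R_phi, alpha=0.5, mu=1e-3):
    """Sparse back projection per (3.19): find W_tilde with C ~= W_tilde @ (Psi @ R_phi).

    C     : (n, 3) tristimulus values.
    Psi   : (N, N) kernel matrix; R_phi : (N, 3) estimated SRF.
    alpha : Lasso/ridge mixing weight (l1_ratio); mu : overall penalty weight.
    """
    X = Psi @ R_phi                         # (N, 3) transformed basis
    model = ElasticNet(alpha=mu, l1_ratio=alpha, fit_intercept=False, max_iter=10000)
    model.fit(X.T, C.T)                     # 3 observations per pixel, N unknowns each
    W_tilde = model.coef_                   # (n, N) sparse coefficients
    H_hat = W_tilde @ Psi                   # Eq (3.18): reconstructed HSI
    return W_tilde, H_hat
```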
The ill-posed back projection from C to H still contains errors. We propose a machine learning model to further close the gap. In the training step, we aim to solve a model g such that the error
$$\|H_j-g(\tilde{W}_j)\Psi\|_2 \tag{3.20}$$
is minimized. This study employs an ensemble of regression learners, including random forest regression and support vector regression.
At the query stage, we retrieve the HSI from the decomposition matrix $\tilde{W}$ of the RGB images $C_q$:
$$H_q=g(\tilde{W}\Psi). \tag{3.21}$$
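A sketch of the refinement model $g$ in (3.20)-(3.21) is given below, using a random forest and a support vector regressor averaged into a simple ensemble; the hyperparameters, the averaging scheme, and the placement of $\Psi$ follow one reading of (3.20)-(3.21) and are assumptions rather than the reported configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

def train_refinement(W_train, H_train):
    """Fit the refinement model g of (3.20) on kernel coefficients and target spectra.

    W_train : (n_samples, N) decomposition coefficients W_tilde of training RGB images.
    H_train : (n_samples, N) corresponding ground-truth HSI coefficients.
    """
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(W_train, H_train)
    svr = MultiOutputRegressor(SVR(kernel="rbf", C=1.0)).fit(W_train, H_train)
    return rf, svr

def query_hsi(models, W_query, Psi):
    """Retrieve H_q = g(W_tilde) Psi at the query stage, per one reading of (3.21)."""
    rf, svr = models
    g_out = 0.5 * (rf.predict(W_query) + svr.predict(W_query))   # simple averaging ensemble
    return g_out @ Psi
```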
Our method is robust to a wide range of SRFs. An experiment was designed to validate the algorithm: the HSIs reconstructed by our method are almost identical whether the measured SRF or the estimated SRF is used. Regardless of the quality of the SRF, the reconstructed HSIs remain of high quality.
We first apply our algorithm to the standard CIE 1964 camera spectral response function (Figure 5 (a)). Our algorithm uses 33 bands from 400 nm to 720 nm, covering the normal visible spectrum, so the 3-channel images are transformed into 33-band images. The reconstructed hyperspectral images closely match the ground-truth HSI (Figure 5 (b)).
To evaluate the effectiveness of our method, we designed experiments comparing standard laboratory camera calibration with ambient light conditions. We bought a low-cost camera (under US$30) from the Internet and set it up for laboratory calibration, in an environment mimicking what consumer users have. A Macbeth color checker with 24 patches, shown in Figure 6 (c), was used. The spectral response of light from these patches was measured by a spectrometer (PhotoResearch PR-670). The first two subgraphs (a) and (b) in Figure 6 show the light source spectrum and the reflectance spectrum of the patches, including the blue patch of the color checker. As shown in subgraph (d) of Figure 6, we fit the coefficients of the standard basis to estimate the actual SRF. With the measured SRF, we reconstruct a high-fidelity HSI in Figure 7. The error maps show small errors between the ground truth and the reconstructed images. The quantitative errors are reported in Table 1.
Table 1. Reconstruction errors (rmse, rrmse) under different SRFs.

method | SRF | rmse | rrmse
ours | CIE | 0.029766 | 0.069987
ours | Measured | 0.037874 | 0.098941
ours | Generated | 0.038611 | 0.12115
[1] | CIE | 0.052838 | 0.1111
[1] | Measured | 0.04472 | 0.15981
[1] | Generated | 0.08846 | 0.16791

rmse = root mean squared error; rrmse = relative root mean squared error.
To achieve the goal of being calibration-free, the SRF must be generated autonomously. Without the help of any additional hardware, we iteratively generated an SRF with the cycleGAN (Figure 8). As the sampled iterations in Figure 9 show, the cycleGAN converges promptly. Despite slight non-smoothness in the spectral response of the R, G, and B bands, the reconstruction accuracy remains high. The error maps in Figure 10 still exhibit only tiny errors.
As shown in Table 1, the SRFs generated by the cycleGAN are accurate. The rmse (root mean squared error) and rrmse (relative root mean squared error) were 0.038 and 0.12, respectively. The convergence is relatively straightforward because the dimension of the latent space equals the number of HSI bands. The estimated SRFs are effective, given the low rmse obtained both with our kernel projection method and with the dictionary learning method in [1].
To further visualize the performance of our reconstruction and generation algorithms, we show the original and the synthesized picture from the recovered HSI side-by-side in Figure 11. It is evident that the two pictures are almost identical, which implies that the reconstructed HSI is sufficiently accurate.
The proposed method is superior to existing methods because it does not require laboratory measurement of the SRF for a new camera. The experiments demonstrate that our automatically estimated SRFs are almost identical to the laboratory measurements.
Our generated SRFs are accurate and effective. The HSIs reconstructed with generated SRFs have low rmse, both with our kernel projection method and with the existing method. We compared our result to an existing method [1] (Figure 12 and the second row of Table 1).
Our learning process, however, has errors. The HSIs reconstructed from a 30-dollar webcam will not be as accurate as those from a 40-thousand-dollar camera. In Figure 10, the RGB comparison shows a relatively large 3% MSE error. The maximal errors occur where the images contain specular highlights; the highlighted regions are sensitive to unknown artifacts and confuse the model during reconstruction.
Fortunately, the target application scenarios mainly require low-cost solutions and are less demanding in terms of accuracy. Moreover, noise, such as modeling errors and light source disturbances, influences the accuracy of SRF identification and HSI reconstruction.
This research contributes to deep reverse analytics by integrating an automatic calibration procedure, and this paper offers two contributions targeting HSI reconstruction. The estimated SRFs and CIFs match the results measured by the standard laboratory method, and the estimated HSIs achieve less than 3% error in rmse. Therefore, our method possesses clear advantages over other methods. Experimental results on real examples demonstrate the effectiveness of our method.
Limitations and future research
Our proposed algorithms work under several assumptions. We assume noise has no significant effect on the reconstruction process; in the future, we may represent noise in the latent space. We will also consider the spatial interaction effects arising from the optical system. In this study, we assume the exposure of each photo-transistor is independent, and we therefore convert a picture to a 1D array during model learning to save computation time. In the future, we will use 2D array modeling.
The authors declare they have not used artificial intelligence (AI) tools in the creation of this article.
This work was supported in part by the National Science and Technology Council, Taiwan, under Grant Numbers MOST 110-2622-E-992-026, MOST 111-2221-E-037-007, NSTC 112-2218-E-992-004, and NSTC 112-2221-E-153-004. The authors also thank Mr. Huang, Yuan's General Hospital (ST110006), and NSYSU-KMU Joint Research Project (#NSYSU-KMU-112-P10).
The authors have no competing interests to declare.