
Hyperspectral images (HSI) can be derived from ordinary RGB cameras [1,2,3,4,5,6]. However, existing methods require detailed optical properties of the underlying camera, which can only be measured in a laboratory with expensive equipment, preventing ordinary users from applying these methods to a plug-and-play webcam.
This study proposes a calibration-free and illumination-independent method, as shown in Figure 1, that allows ordinary users to use an arbitrary camera immediately. Our algorithm can transform an ordinary webcam into an expensive HSI camera without additional hardware. Mathematically, the forward transformation, which maps high-dimensional HSI images to low-dimensional RGB images, is relatively easy compared with the reverse transformation, which maps from low to high dimensions.
However, both the forward and reverse transformations involve a device-dependent response function of the underlying camera. In previous methods, the camera and light source must remain the same, and the spectral response function (SRF) of the camera and the current illuminating function (CIF) of the ambient illumination must be identified separately with specific equipment, such as standardized light sources, color checkers, and spectrometers. The images in the training database must be taken with the same camera under the same light source. Such limitations prevent the algorithms from being plug-and-play.
Hyperspectral technology has been widely used in different fields [7]. Although HSI imaging possesses tremendous advantages across a wide range of applications, the extremely high acquisition cost (US$40,000 or above) limits the suitable applications and usage scenarios. Existing computational methods that convert RGB to HSI can significantly broaden the applicability of HSI.
In this study, a semi-finished model, independent of devices and illumination, is shipped with the installation package, as shown in the Model part of Figure 1. Before use, a setup step is performed without additional hardware to extract the SRF of the underlying camera, as shown in step 1 of Figure 1. During each usage, the ambient CIF must be estimated for each captured image, as shown in step 2 of Figure 1. The final reconstruction with the SRF and CIF then projects RGB images to device- and illumination-independent HSI.
To accumulate sparse projection kernels of hyperspectral signatures from many hyperspectral priors, we estimate the camera SRF with a most-probable estimation algorithm. As shown in Figure 2, two parametric tasks are accomplished automatically at the setup step without user involvement. The SRF and CIF are estimated by uploading random images taken by the target camera to the cloud, so no calibration step is required of ordinary users. We develop a triangular generative adversarial assimilation that combines two different methods, one supervised and one unsupervised, each with its own advantages, which pull each other up to achieve a high-precision reconstruction.
Despite extensive studies on reconstructing HSI from RGB images, numerous challenges remain unsolved. This study seeks to close three research gaps. Existing studies require a laboratory-calibrated camera, and the training is bound to that specific camera. Our results contribute to online retrieval of the SRF and to SRF-independent offline training.
Much research has contributed to reverse mapping from the low-dimensional RGB space to the high-dimensional hyperspectral space [1,4,6,8,9]. However, gaps still exist.
The essence of the forward problem is a spatial convolution between the incident image spectrum and the camera optics [10]. The properties of camera spectral response functions have been studied extensively [11]. The reverse mapping in algebraic or iterative approximation methods is sufficient as long as the camera response function can be correctly acquired before applying the algorithms [4,6]. The reverse mapping in kernel representation is prevalent and stable [1,8,9]. Statistical inference methods have been reported to be effective in enhancing the accuracy of the reverse mapping [2,3].
With the assistance of additional hardware, the recovered image can be more accurate [5,12,13,14]. However, such assistance does not fit the goal of this study. Therefore, the last piece of the puzzle for the HSI reconstruction is still acquiring camera response functions without the help of expensive instruments.
The second research gap leads to a challenge in algorithm design. Because the central part of the forward problem involves the convolution with the camera SRF, the training algorithm can only retrieve the function mapping if the SRF is determined at the training phase. On the other hand, estimating the SRF is difficult without pairs of input RGB and output HSI images [15]. Our problem is to retrieve the HSI without training pairs or a system model. The original generative adversarial network (GAN) requires paired images $(x, y)$ for the discriminator $D$ and generator $G$ to evaluate
$$\min_G \max_D V(D,G) = \mathbb{E}_{x\sim p_{\mathrm{data}}(x)}[\log D(x|y)] + \mathbb{E}_{z\sim p_z(z)}[\log(1 - D(G(z|y)))].$$
Thanks to modern progress in maximum-likelihood theorems and sparse variations of the GAN, such a "double unknown retrieval" has become possible [16,17,18]. One advantage of cycleGAN over standard GANs is that the training data need not be paired. This implies that, in our situation, unpaired (RGB, HSI) images are sufficient for estimating the output SRFs. The idea of conditional and cyclic adversarial networks with image-to-image translation has successfully solved the problem of double unknown retrieval [19].
When the training data are not completely paired, the latent space reconstruction becomes the key part of the double unknown retrieval [20,21,22]. Convolution particle filters have been proven effective in estimating unknown parameters in state-space [23].
The generative network synthesizes the SRFs based on a predictive distribution conditioned on the correctness of the recovery of the original RGB values [19,24]. When the projection to the latent space is not linear, some studies apply advanced algorithms to address the nonlinearity issue [25].
The third gap in the challenges of HSI reconstruction is the interference of versatile illumination conditions. Many studies have tried to decouple the influence of illumination. Despite its simplicity, an accurate spectral reflectance reconstruction must come with a spectral estimation of the illumination [13]. Some solve the issue in the tristimulus value space (RGB or CIE XYZ) by collecting images under multiple illumination conditions [26,27,28]. Although CIE XYZ starts from the spectral response of the illumination, such approaches primarily focus on the color appearance for the human eye, not the spectral reflectance of objects. Therefore, a research gap exists in decoupling the influence of unknown illumination for spectral response estimation algorithms.
The sparse assimilation algorithm can reconstruct the parameters from limited observations [29]. However, most iterative algorithms are computationally intensive and thus unsuitable for online application; a tractable algorithm is needed for computational feasibility. We use hierarchical Bayesian learning and Metropolis-Hastings algorithms [30,31] to estimate the joint probability densities. Therefore, this study exploits an assimilation method adapted to online computation.
As shown in Figure 3, a camera with spectral response function $R_\phi(\lambda, i)$ takes a spectral reflectance input $h(j,\lambda)$ for wavelength $\lambda \in [1, N]$ and produces the tristimulus values $c(j,i)$ for $i \in \{R,G,B\}$ (or $[1,3]$), where $j \in [1,n]$ indexes the $j$-th dataset pair in the database. The pixels of the 24-patch color checker are arranged into a column vector for ease of dimension expression.
The forward problem is stated as a dichromatic reflection model (DRM)
$$h(\lambda) = E(\lambda)\,\rho(\lambda), \quad \forall \lambda, \qquad c(j,i) = \sum_{\lambda} h(j,\lambda)\,R(\lambda,i), \quad \forall j, i, \tag{3.1}$$
or,
$$C_{n\times 3} = H_{n\times N}\,R_{N\times 3}, \quad \text{for the } j\text{-th image in the database,} \tag{3.2}$$
where the matrix form collects the indexes such that $C_{n\times 3} = \{c(j,i)\}_{j\in[1,n],\, i\in[1,3]}$.
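For concreteness, the following is a minimal numerical sketch of the forward DRM (3.1)-(3.2); the flat illuminant, random reflectances, and Gaussian-shaped SRF are illustrative assumptions, not measured quantities.

```python
import numpy as np

# Illustrative dimensions: N spectral bands, n color-checker patches.
N, n = 33, 24
wavelengths = np.linspace(400, 720, N)            # nm, 33 bands as in the experiments

# Assumed (not measured) quantities, for illustration only.
E = np.ones(N)                                    # flat illuminant E(lambda)
rho = np.random.rand(n, N)                        # patch reflectances rho(lambda)
H = rho * E                                       # h(lambda) = E(lambda) * rho(lambda), Eq (3.1)

def gaussian_srf(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Toy three-channel SRF R(lambda, i); a real camera SRF replaces this.
R = np.stack([gaussian_srf(c, 40) for c in (610, 540, 470)], axis=1)   # N x 3

C = H @ R                                         # C_{n x 3} = H_{n x N} R_{N x 3}, Eq (3.2)
print(C.shape)                                    # (24, 3)
```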
By the representation theorem, we can find coefficients $\tilde{W} = \{\tilde{w}(k,k')\}_{k,k'\in[1,m]}$ for $m$ basis functions such that the RGB images $C$ can be approximated by
$$C_{n\times 3} \approx \tilde{W}_{n\times m}\,\Psi_{m\times N}\,R_{\phi,\,N\times 3}. \tag{3.3}$$
The DRM (3.2) becomes
$$\tilde{W}\Psi R_\phi = H R_\phi. \tag{3.4}$$
Therefore,
$$H = \tilde{W}\Psi. \tag{3.5}$$
As discussed in the previous section, many methods exist to estimate the ill-posed back projection, but challenges still exist. Our problem is an inverse one: given the tristimulus values $C$, we want to reconstruct $H$. Because the dimension of $H$ is much higher than that of $C$, the inverse projection is ill-posed. Our method requires no pre-built calibration and prevents over-fitting after training.
The CIF is estimated from images of the white block of the color checker. The illumination light $e$ is converted to electric signals through the camera response $e_\phi$, and therefore $e(\lambda) = \sum_i e_\phi(\lambda, i)$. Utilizing the above-mentioned method, the CIF can be decomposed over a set of kernel functions. We have
$$e(\lambda) = \sum_i e_\phi(\lambda,i) = \sum_{i,k} \gamma_{i,k}\, b_k(\lambda,i), \quad \forall \lambda \in [1,N], \tag{3.6}$$
where $b_k(\lambda,i)$ is the $k$-th basis function for the CIF, and $\gamma_{i,k}$ is the corresponding coefficient.
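A minimal sketch of estimating the CIF coefficients in (3.6) by least squares follows, assuming Gaussian basis functions and placeholder white-patch responses; a per-channel fit of the $\gamma_{i,k}$ proceeds analogously.

```python
import numpy as np

N, K = 33, 6
wavelengths = np.linspace(400, 720, N)

# Assumed Gaussian kernels b_k(lambda) as the CIF basis; the actual basis follows
# the kernel decomposition described later, this is only a placeholder choice.
centers = np.linspace(400, 720, K)
B = np.exp(-0.5 * ((wavelengths[:, None] - centers[None, :]) / 40.0) ** 2)   # N x K

# e(lambda) = sum_i e_phi(lambda, i): illumination folded through the camera response.
e_phi = np.random.rand(N, 3)          # placeholder for the white-patch measurements
e = e_phi.sum(axis=1)

# Least-squares fit of the coefficients gamma in Eq (3.6).
gamma, *_ = np.linalg.lstsq(B, e, rcond=None)
e_hat = B @ gamma
print(np.linalg.norm(e - e_hat) / np.linalg.norm(e))   # relative fitting error
```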
Classical generative adversarial networks use neural networks entirely for classification and regression. Such modeling suffers from convergence problems. In our method, we use optical models as generators and neural networks as discriminators to maximize the use of prior model information for the stochastic process of Lambertian reflectance (Eq 3.2), which is implemented through a Monte Carlo radiative simulation. Through the collected priors, the discriminator can distinguish between successful and unsuccessful generations. Therefore, the cyclic iteration converges fast and accurately estimates the latent sparse space under sparse convergence metrics.
We hybridize a statistical generator and a neural network discriminator to maximize the usage of prior model information for the stochastic process. Our generator can also take environmental conditions as covariates in the random process and is robust to anti-symmetric station distributions.
A standard GAN comprises a generator $G$ and a discriminator $D$ [32]. The models $G$ and $D$ can be neural networks or any mathematical functions, as long as the optimization (3.7) with Lagrangian multiplier $\eta$ has solutions.
$$G^* = \arg\min_G \max_D \mathcal{L}_{\mathrm{GAN}}(G,D) + \eta\, \mathcal{L}_{\ell_1}(G). \tag{3.7}$$
The standard loss functions have the form
$$\mathcal{L}_{\mathrm{GAN}}(G,D) = \mathbb{E}_{Z,Z'}[\log D(Z,Z')] + \mathbb{E}_{Z,Z_1}[\log(1 - D(Z, G(Z,Z_1)))], \tag{3.8}$$
$$\mathcal{L}_{\ell_1}(G) = \mathbb{E}_{Z,Z',Z_1}\big[\,\|Z' - G(Z,Z_1)\|_1\,\big]. \tag{3.9}$$
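A minimal PyTorch sketch of the combined objective (3.7)-(3.9) is given below, assuming probability-valued discriminator outputs; the weight $\eta$ and the tensors in the toy call are illustrative assumptions.

```python
import torch

def gan_objective(d_real, d_fake, z_prime, g_out, eta=10.0):
    """Sketch of Eqs (3.7)-(3.9).
    d_real = D(Z, Z') and d_fake = D(Z, G(Z, Z1)) are probabilities in (0, 1);
    eta is the Lagrangian weight of the l1 term (its value here is an assumption)."""
    eps = 1e-8
    l_gan = torch.log(d_real + eps).mean() + torch.log(1 - d_fake + eps).mean()  # Eq (3.8)
    l_l1 = (z_prime - g_out).abs().mean()                                         # Eq (3.9)
    # The generator minimizes and the discriminator maximizes the adversarial part, Eq (3.7).
    return l_gan + eta * l_l1

# Toy usage with random tensors standing in for discriminator/generator outputs.
loss = gan_objective(torch.rand(8), torch.rand(8), torch.rand(8, 33), torch.rand(8, 33))
```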
The GAN used in this study is a type of cycleGAN, which contains two generators and two discriminators, as shown in Figure 4. Our cycleGAN takes two sets of images as input and produces output containing the corresponding SRF. The observation $Z$ contains the pair of $C_3 = (R,G,B)$ images and 33-band HSI images $H_{33}$. The subscripts 3 and 33 denote the number of bands of the variables and are omitted when the meaning is clear. The latent estimation represents the SRF, $Z' = (R'_{33}, H'_{33}, C'_3)$. In the target domain, $Z'$ is compared with a small set of collected SRFs $Z_1 = R^1_{33}$, which serve only as shape templates. The re-estimated image pairs are $Z'' = (C''_3, H''_{33})$, where $C''_3 = H'_{33} R'_{33}$, $H'_{33}$ contains the coefficients output by $G_{ZZ'}$, and $H''_{33}$ is the direct output of $G_{Z'Z}$.
The models in the cycleGAN are $G_{ZZ'}$, $D_{Z'Z_1}$, $G_{Z'Z}$, and $D_{ZZ''}$, respectively. The cycleGAN takes $C_3$ and $H_{33}$ as the input dataset. We first prepare a set of (RGB, HSI) pairs for different color patches and a known camera (e.g., CIE 1964). Because the same color patch is expected to produce the same HSI through different cameras, we prepare additional image pairs by taking the image from the target unknown camera as the RGB and the HSI of the known camera as the HSI. We also need a small sample of normal SRFs $Z_1 = R^1_{33}$ as target templates to prevent multiple solutions that are dissimilar to normal SRFs.
The generator $G_{ZZ'}$ is a reverse model, which takes $Z = (H, C)$ in the source domain and outputs $Z' = (R', H', C')$ in the target domain. During generation, $G_{ZZ'}$ synthesizes SRFs $Z' = R'$, which is equivalent to applying a perturbation $\Delta$ such that $R' = R\Delta$ without violating the modality (Eq 3.10) and positiveness (Eq 3.11) constraints.
$$\begin{cases} R_i(\lambda_{k+1}) > R_i(\lambda_k), & k = 1,\dots,m_i - 1,\\[2pt] R_i(\lambda_{k+1}) \le R_i(\lambda_k), & k = m_i,\dots,33, \end{cases} \tag{3.10}$$
$$R_i(\lambda_k) \ge 0, \quad k = 1,\dots,33, \tag{3.11}$$
where $i \in \{1,2,3\}$ indexes the RGB channels, $m_i$ is a predefined mode (peak location) for channel $i$, and $\lambda_k$ is the wavelength of the $k$-th band. To integrate the constraint (3.10) into the loss function of the first discriminator $D(Z', Z_1)$, the constraints can be written as a violation score:
$$\zeta_d = \sum_{i=1,2,3}\ \sum_{k=1,\dots,33} \mathrm{sgn}(k - m_i)\,\{R_i(\lambda_{k+1}) - R_i(\lambda_k)\} - R_i(\lambda_k), \tag{3.12}$$
where $\mathrm{sgn}(s) = 1$ if $s > 0$ and $\mathrm{sgn}(s) = -1$ if $s \le 0$.
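The following is a minimal sketch of the violation score (3.12); the 1-based peak indices $m_i$ and the synthetic unimodal SRF are assumptions for illustration, and the sum stops at $k = 32$ so that $\lambda_{k+1}$ stays within the 33 bands.

```python
import numpy as np

def violation_score(R, modes):
    """Sketch of the modality/positiveness violation score zeta_d in Eq (3.12).
    R: array of shape (33, 3), candidate SRF values R_i(lambda_k);
    modes: assumed 1-based peak indices m_i for the three channels."""
    score = 0.0
    K = R.shape[0]
    for i in range(3):
        for k in range(K - 1):                              # 1-based k = 1, ..., 32
            sgn = 1.0 if (k + 1) - modes[i] > 0 else -1.0   # sgn(k - m_i)
            score += sgn * (R[k + 1, i] - R[k, i]) - R[k, i]
    return score

# Example: a synthetic unimodal, non-negative SRF should yield a low (negative) score.
lam = np.linspace(400, 720, 33)
R = np.exp(-0.5 * ((lam[:, None] - np.array([610, 540, 470])[None, :]) / 40.0) ** 2)
print(violation_score(R, modes=[22, 15, 8]))
```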
The synthesized SRFs $Z'$ are rejected by $D_{Z'Z_1}$ if they are dissimilar to the normal SRFs $Z_1 = R^1_{33}$. The discriminator $D_{Z'Z_1}$ is formed directly by a residual neural network (ResNet34). The network adds skip connections that bypass internal layers to avoid the vanishing-gradient and accuracy-saturation problems. The construction of ResNet repeats a fixed pattern several times, in which a strided-convolution downsampler skips over every two convolutions.
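The sketch below, assuming PyTorch and torchvision are available, shows how a ResNet34 backbone could serve as the discriminator $D_{Z'Z_1}$; the input encoding of the SRF pair, the single-sigmoid head, and the toy 224x224 input are our illustrative choices, not details confirmed by the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

def make_srf_discriminator(in_channels=3):
    """Sketch of a ResNet34-based discriminator: skip connections mitigate
    vanishing gradients, and strided convolutions downsample between stages."""
    net = resnet34(weights=None)
    # Adapt the first convolution to the assumed 2D encoding of the candidate
    # SRF / template pair, and replace the classifier head with a single
    # probability output p_d.
    net.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2, padding=3, bias=False)
    net.fc = nn.Sequential(nn.Linear(net.fc.in_features, 1), nn.Sigmoid())
    return net

disc = make_srf_discriminator()
p_d = disc(torch.rand(4, 3, 224, 224))   # toy batch; output shape (4, 1)
```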
$G_{Z'Z}$ is the forward model, which generates the re-estimated image pairs $Z'' = (C''_3, H''_{33})$. The re-estimated RGB images $C''_3$ are obtained by directly applying the estimated SRF $R'$ to the high-dimensional $H'_{33}$, i.e., $C''_3 = H'_{33} R'_{33}$, where $H'_{33}$ is one of the outputs of $G_{ZZ'}$. The re-estimated HSI images $H''_{33}$ are the output of $G_{Z'Z}$ taking $C'_3$, another output of $G_{ZZ'}$, as input.
Finally, a ResNet34 $D_{ZZ''}$, which takes the original images $Z$ and the re-estimated images $Z''$, rejects those with large errors. The discriminator $D_{Z'Z_1}$ uses a probability-based loss function, and $D_{ZZ''}$ uses a simple loss function based on the mean squared error (MSE) between the expected and predicted outputs.
The loss functions for the two discriminators are expressed as scores:
$$D(Z',Z_1) = (p_d - 1)^2 + p_d^2 + \eta_1 \frac{(R' - R_1)^2}{R'} + \eta_2\, \zeta_d, \tag{3.13}$$
$$D(Z,Z'') = \frac{(C - C'')^2}{C} + \frac{(H - H'')^2}{H}, \tag{3.14}$$
where $p_d$ is the probability output of the discriminator $(Z', Z_1)$, and $\eta_1$ and $\eta_2$ are scaling factors controlling the weight of the similarity and constraint terms, respectively. When the cycleGAN converges, the error scores (3.13) and (3.14) between $Z'$ and $Z_1$, and between $Z$ and $Z''$, respectively, are minimized (the convergence trend of the scores is shown in Figure 9).
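The following sketch evaluates the two discriminator scores, reading the fractions in (3.13)-(3.14) as element-wise relative squared errors; this reading and the $\eta$ values are assumptions.

```python
import numpy as np

def score_d1(p_d, R_prime, R_1, zeta_d, eta1=1.0, eta2=1.0, eps=1e-8):
    """Sketch of Eq (3.13): probability term + SRF-similarity term + constraint score.
    eta1 and eta2 are scaling factors; their values here are assumptions."""
    return (p_d - 1.0) ** 2 + p_d ** 2 \
        + eta1 * np.sum((R_prime - R_1) ** 2 / (R_prime + eps)) \
        + eta2 * zeta_d

def score_d2(C, C_pp, H, H_pp, eps=1e-8):
    """Sketch of Eq (3.14): relative squared errors of the re-estimated RGB and HSI."""
    return np.sum((C - C_pp) ** 2 / (C + eps)) + np.sum((H - H_pp) ** 2 / (H + eps))
```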
To increase the accuracy of the back projection, we use a kernel method to decompose the HSI and tristimulus RGB images. The estimated projection over the kernels can further minimize the reconstruction errors [33]. We decompose the HSI in the DRM (3.2) into a set of kernels $\{\psi(\lambda,\lambda')\}_{\lambda,\lambda' \in [1,N]}$,
$$h(x,j,\lambda) = \sum_{\lambda'} \tilde{h}(x,j,\lambda')\,\psi(\lambda,\lambda'), \quad \forall x \in [1,n],\ j \in [1,m], \qquad \text{or} \qquad H^{(j)}_{n\times N} = \tilde{H}^{(j)}_{n\times N}\,\Psi_{N\times N}. \tag{3.15}$$
At the training stage, images are indexed by $j \in [1,m]$. We aim to find a set of kernel coefficients $\tilde{H}^{(j)} = \{\tilde{h}(x,j,\lambda)\}_{x\in[1,n],\,\lambda\in[1,N]}$ that can span the tristimulus space.
$\tilde{H}$ can be estimated by the Yule-Walker equation [34] or a Nadaraya-Watson kernel estimator [35]. The kernels $\Psi$ are chosen in a reproducing kernel Hilbert space, guaranteeing that the basis vectors exist [33].
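A minimal sketch of the kernel decomposition (3.15) is given below, assuming Gaussian (RBF) kernels over the wavelength grid; the bandwidth and the random placeholder HSI signatures are illustrative.

```python
import numpy as np

N, n = 33, 24
wavelengths = np.linspace(400, 720, N)

# Assumed RBF kernels psi(lambda, lambda') in an RKHS; the bandwidth is illustrative.
Psi = np.exp(-0.5 * ((wavelengths[:, None] - wavelengths[None, :]) / 30.0) ** 2)   # N x N

H = np.random.rand(n, N)                       # placeholder HSI signatures h(x, j, lambda)

# Least-squares kernel coefficients so that H ~= H_tilde @ Psi, Eq (3.15).
H_tilde = np.linalg.lstsq(Psi.T, H.T, rcond=None)[0].T
print(np.linalg.norm(H - H_tilde @ Psi) / np.linalg.norm(H))   # relative residual
```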
By the representation theorem, the tristimulus values can be approximated by the transformed kernels $\sum_\lambda \tilde{h}(x,j,\lambda) R(\lambda,i)$ and the coefficients $\tilde{W} = \{\tilde{w}(x,x')\}_{x,x'\in[1,nm]}$ such that
$$C(x,j,i) \approx \sum_{x'} \sum_{\lambda} \tilde{w}(x,x')\,\tilde{h}(x',j,\lambda)\,R(\lambda,i), \quad \text{for } x \in [1,n],\ j \in [1,m],\ i \in [1,3]. \tag{3.16}$$
We decompose an RGB image by approximating
$$C^{(j)}_{n\times 3} \approx \tilde{W}^{(j)}_{n\times N}\,\Psi_{N\times N}\,R_{N\times 3}. \tag{3.17}$$
Together with the DRM (3.2), this implies that
$$H^{(j)}_{n\times N} \approx \tilde{W}^{(j)}_{n\times N}\,\Psi_{N\times N}. \tag{3.18}$$
Because the back projection $\tilde{W}$ may not align with the direction of the covariance of $\Psi^{(j)} R$, we need to make additional assumptions in the $\ell_1$ space. Assuming the tristimulus space is far smaller than that of the HSI space, the sparse partial least squares regression partitions the space into two disjoint subspaces such that $H = (H_1, H_2)$ is spanned by relevant ($H_1$) and irrelevant ($H_2$) variables [36,37]. Such a partition effectively isolates uncorrelated bases in the latent space.
We aim to find a set of coefficients that maximizes the span of the hyperspectral space while avoiding over-fitting. The goal is
$$\max_{\mu} \min_{\tilde{W}} \|C - \tilde{W}\Psi R_\phi\|^2 + \mu\left\{\frac{1-\alpha}{2}\|\tilde{W}\|_2^2 + \alpha\|\tilde{W}\|_1\right\}. \tag{3.19}$$
The challenge of the ill-posed problem is inverting a near-singular system. The regularization in (3.19) with a positive $\ell_1$ perturbation can suppress over-fitting effectively.
To avoid over-fitting, we further generalize the problem with an elastic net (3.19) containing a Least Absolute Shrinkage and Selection Operator (Lasso) term [38], where $\alpha$ is the Lasso penalty and $\|\cdot\|_1$, $\|\cdot\|_2$ are the $\ell_1$ and $\ell_2$ norms.
Optimization (3.19) pushes coefficients to zero if the covariates are insignificant, owing to the $\ell_1$ properties. The reconstruction efficiency increases when the transformation manifests sparsity [39]. The optimization (3.19) is regulated by the Lasso penalty ($\alpha = 1$) or the ridge penalty ($\alpha = 0$), and it takes advantage of the sparse $\ell_1$ norm in evaluating solutions of the ill-posed problem [40].
If α=0, the optimization in (3.19) reduces to an ordinary generalized matrix inverse, which serves as a comparison basis. The least-square estimation in the objective creates a significant variance when covariates exhibit multicollinearity.
Ridge regression performs optimization to compensate for the multicollinearity problem by finding a balance between variance and bias [41]. The ridge penalty effectively reduces the variance of the identified coefficients [42,43]. The experiments demonstrate that the lasso and ridge estimation effectively reconstruct the HSI without over-fitting.
Due to the properties of the $\ell_1$ space, an optimization (e.g., $\min_x \|x\|_1$ s.t. (3.19)) should possess minimal non-zero solutions and yields strong reconstruction performance if sparsity properties, such as restricted isometry and incoherence, are satisfied [39,40,44]. In our multivariate transformation matrix $\Psi_i$, the Lasso penalty tends to shrink the coefficients of less important covariates to zero, thus generating more zero solutions and fitting our assumption about the hyperspectral space.
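A hedged sketch of the sparse back projection (3.19) using scikit-learn's ElasticNet follows, where l1_ratio plays the role of the Lasso penalty $\alpha$; the regularization strengths, the random placeholder SRF, and the RGB values are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

N, n = 33, 24
idx = np.arange(N)
Psi = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 3.0) ** 2)   # assumed RBF kernels
R_phi = np.random.rand(N, 3)                 # placeholder SRF (estimated elsewhere)
C = np.random.rand(n, 3)                     # observed RGB values for n patches

# C_{n x 3} ~= W_{n x N} (Psi R_phi)_{N x 3}; transposing turns each pixel's RGB
# triple into a 3-sample regression with N unknown coefficients (hence the need
# for the elastic-net regularization of Eq 3.19).
A = Psi @ R_phi                              # N x 3
model = ElasticNet(alpha=0.1, l1_ratio=0.5, fit_intercept=False, max_iter=10000)
model.fit(A.T, C.T)                          # X: (3, N), y: (3, n)
W = model.coef_                              # (n, N), the sparse back projection
H_hat = W @ Psi                              # Eq (3.18): reconstructed HSI
print(W.shape, H_hat.shape)
```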
The ill-posed back projection from $C$ to $H$ still contains errors. We propose a machine learning model to further close the gap. In the training step, we aim to learn a model $g$ such that the error
$$\left\| H_j - g(\tilde{W}_j)\,\Psi \right\|_2 \tag{3.20}$$
is minimized. This study employs an ensemble of regression learners, including random forest regression and support vector regression.
At the query stage, we retrieve the HSI from the decomposition matrix $\tilde{W}$ of the RGB images $C_q$:
$$H_q = g(\tilde{W}\Psi). \tag{3.21}$$
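Below is a sketch of the residual learner $g$ in (3.20)-(3.21), assuming a simple averaging ensemble of scikit-learn's random forest and support vector regressors trained on rows of the decomposition matrix; the hyperparameters and the random placeholder data are illustrative, and whether $g$ acts on $\tilde{W}$ or on $\tilde{W}\Psi$ follows the training stage.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Placeholder training data: rows of W_tilde as features, ground-truth HSI rows as targets.
W_train = np.random.rand(200, 33)
H_train = np.random.rand(200, 33)

# Two learners of the ensemble; SVR needs a multi-output wrapper.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
svr = MultiOutputRegressor(SVR(kernel="rbf", C=1.0))
rf.fit(W_train, H_train)
svr.fit(W_train, H_train)

def g(W_query):
    """Simple averaging ensemble standing in for the learned model g of Eqs (3.20)-(3.21)."""
    return 0.5 * (rf.predict(W_query) + svr.predict(W_query))

# Query stage: recover the HSI from the decomposition of new RGB images (Eq 3.21).
W_query = np.random.rand(24, 33)
H_q = g(W_query)
print(H_q.shape)   # (24, 33)
```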
Our method is robust to a wide range of SRFs. An experiment was designed to validate our algorithm. The HSIs reconstructed by our method are almost identical whether the measured SRF or the estimated SRF is used. Regardless of the quality of the SRF, the reconstructed HSIs are always of high quality.
We first apply our algorithm to the standard CIE 1964 camera spectral response function (Figure 5(a)). Our algorithm uses 33 bands from 400 nm to 720 nm, covering the normal visible spectrum, so the 3-channel images are transformed into 33-band images. The reconstructed hyperspectral images match the ground-truth HSI closely (Figure 5(b)).
To evaluate the effectiveness of our method, we designed experiments comparing standard laboratory camera calibration with ambient light conditions. We bought a low-cost camera (under US$30) from the Internet and set it up for laboratory calibration; the environment mimics the one consumer users have. A Macbeth color checker with 24 patches, shown in Figure 6(c), was used. The spectral response of light reflected from these patches was measured by a spectrometer (Photo Research PR-670). The first two subgraphs, Figure 6(a) and (b), show the light source spectrum and the reflectance spectra of the patches and of the blue patch of the color checker. As shown in Figure 6(d), we fit the coefficients of the standard basis to estimate the actual SRF. With the measured SRF, we reconstruct a high-fidelity HSI in Figure 7. The error maps show small errors between the ground truth and reconstructed images. The quantitative errors are given in Table 1.
Table 1. Reconstruction errors for different SRFs.

| method | SRF | rmse | rrmse |
|---|---|---|---|
| ours | CIE | 0.029766 | 0.069987 |
| ours | Measured | 0.037874 | 0.098941 |
| ours | Generated | 0.038611 | 0.12115 |
| [1] | CIE | 0.052838 | 0.1111 |
| [1] | Measured | 0.04472 | 0.15981 |
| [1] | Generated | 0.08846 | 0.16791 |

rmse = root mean squared error; rrmse = relative root mean squared error.
To achieve the goal of being calibration-free, the SRF must be generated autonomously. Without any additional hardware, we iteratively generated an SRF with the cycleGAN (Figure 8). As the sampled iterations in Figure 9 show, the cycleGAN converges promptly. Despite slight non-smoothness in the spectral responses of the R, G, and B bands, the reconstruction accuracy remains high. The error maps in Figure 10 exhibit only tiny errors.
As shown in Table 1, the SRFs generated by the cycleGAN are accurate. The rmse (root mean squared error) and rrmse (relative root mean squared error) were 0.038 and 0.12, respectively. The convergence is relatively straightforward because the dimension of the latent space is the same as the number of HSI bands. The estimated SRFs are effective according to the low rmse, whether the reconstruction uses our kernel projection method or the dictionary learning method in [1].
To further visualize the performance of our reconstruction and generation algorithms, we show the original picture and the picture synthesized from the recovered HSI side by side in Figure 11. The two pictures are almost identical, which implies that the reconstructed HSI is sufficiently accurate.
The proposed method is superior to existing methods because it does not require laboratory measurement of the SRF for a new camera. The experiments demonstrate that our automatically estimated SRFs are almost identical to the laboratory measurements.
Our generated SRFs are accurate and effective. The reconstructed HSIs with generated SRFs have low rmse, both by our kernel projection method and the existing method. We compared our result to an existing method [1] (Figure 12 and the second row of Table 1).
Our learning process, however, has errors. The HSIs reconstructed from a 30-dollar webcam will not be as accurate as those from a 40-thousand-dollar camera. In Figure 10, the RGB comparison shows an MSE error as large as 3%. The maximal errors occur where the images contain specular highlights; the highlighted parts are sensitive to unknown artifacts and confuse the model during reconstruction.
Fortunately, the target application scenarios mainly require low-cost solutions and tolerate moderate accuracy. Moreover, noise sources, such as modeling errors and light-source disturbances, influence the accuracy of SRF identification and HSI reconstruction.
This research contributes to deep reverse analytics by integrating an automatic calibration procedure. This paper offers two contributions to HSI reconstruction. The estimated SRFs and CIFs match the results measured by the standard laboratory method, and the estimated HSIs achieve errors below 3% in rmse. Therefore, our method possesses clear advantages over other methods. Experimental results on real examples demonstrate the effectiveness of our method.
Limitations and future research
Our proposed algorithms work under several assumptions. We assume noise has no significant effect on the reconstruction process; in the future, we can represent noise in the latent space. We will also consider the spatial interaction effects introduced by the optical system. In this study, we assume the exposure of each photo-transistor is independent; therefore, we convert a picture to a 1D array in the model-learning process to save computation time. In the future, we will use 2D array modeling.
The authors declare they have not used artificial intelligence (AI) tools in the creation of this article.
This work was supported in part by the National Science and Technology Council, Taiwan, under Grant Numbers MOST 110-2622-E-992-026, MOST 111-2221-E-037-007, NSTC 112-2218-E-992-004, and NSTC 112-2221-E-153-004. The authors also thank Mr. Huang, Yuan's General Hospital (ST110006), and NSYSU-KMU Joint Research Project (#NSYSU-KMU-112-P10).
The authors have no competing interests to declare.