
In this study, we introduce a family of hypersurfaces of revolution characterized by six parameters in the seven-dimensional pseudo-Euclidean space E_3^7. These hypersurfaces exhibit intriguing geometric properties, and our aim is to analyze them in detail. To begin, we compute the fundamental form, Gauss map, and shape operator matrices associated with this hypersurface family. These matrices carry the essential information about the local geometry of the hypersurfaces, including their curvatures and tangent spaces. Using the Cayley-Hamilton theorem, which expresses the characteristic polynomial of a matrix in terms of the matrix itself, we apply matrix algebra techniques to compute the curvatures of the hypersurfaces effectively. In addition, we establish equations relating the mean curvature and the Gauss-Kronecker curvature of the hypersurface family; these equations offer insight into the geometric behavior of the hypersurfaces and a deeper understanding of their intrinsic properties. Furthermore, we investigate the relationship between the Laplace-Beltrami operator, a differential operator that characterizes the geometry of the hypersurfaces, and a specific 7×7 matrix A. By studying this relation, we gain further insight into the geometric structure and differential properties of the hypersurface family. Overall, our study contributes to the understanding of hypersurfaces of revolution in E_3^7, offering mathematical insights and establishing connections between various geometric quantities and operators associated with this family.
Citation: Yanlin Li, Erhan Güler. Hypersurfaces of revolution family supplying Δr = Ar in pseudo-Euclidean space E_3^7 [J]. AIMS Mathematics, 2023, 8(10): 24957-24970. doi: 10.3934/math.20231273
Image blur, such as motion blur, is a common disturbance in real-world photography, so image deblurring is of great importance for downstream vision tasks. Motion blur can be modeled as the convolution of the sharp image with a blur kernel, which is typically unknown in real-world scenarios. The image degradation can be modeled as
B = L ⊗ K + n,    (1.1)
where B, L, and K denote the motion-blurred image, the sharp image, and the blur kernel (point spread function), respectively, and n represents the additive white Gaussian noise with a mean of 0 and a standard deviation of σ, which is introduced during the image degradation process. The symbol ⊗ denotes the convolution operator.
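As a concrete illustration of Eq (1.1), the following minimal Python sketch (not the authors' implementation) synthesizes a blurred, noisy observation; the grayscale value range [0, 1], the noise level, and the simple horizontal motion kernel are assumptions made only for this example.

```python
import numpy as np
from scipy.signal import convolve2d

def degrade(L, K, sigma=0.01, seed=0):
    """Synthesize B = L (x) K + n with zero-mean Gaussian noise of std sigma."""
    rng = np.random.default_rng(seed)
    B = convolve2d(L, K, mode="same", boundary="symm")  # L (x) K
    n = rng.normal(0.0, sigma, size=B.shape)            # additive white Gaussian noise
    return np.clip(B + n, 0.0, 1.0)

# Illustrative 15x15 horizontal motion kernel applied to a random "image".
K = np.zeros((15, 15)); K[7, :] = 1.0; K /= K.sum()
L = np.random.default_rng(1).random((64, 64))
B = degrade(L, K, sigma=0.01)
```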
Blind deblurring aims to reconstruct both the blur kernel K and the sharp latent image L from a blurred input image B. This problem is ill-posed because different combinations of L and K can produce the same observation B. To address it, prior knowledge must be incorporated to steer the solution away from trivial or poor local optima.
Researchers have extensively explored the optimization of blur kernels modeled with image priors in recent years [1,2,3]. Li et al. [4] used a deep network to formulate the image prior as a binary classifier. Levin et al. [5] employed hyper-Laplacian priors to model the latent image and derived a simple approximation for optimizing the maximum a posteriori (MAP) objective. In the pursuit of efficient blind deblurring, various prior terms tailored to enhance image clarity have been integrated into the MAP framework [6,7,8]. Krishnan et al. [9] used an L1/L2 regularization scheme to sparsely represent the gradient image; its main feature is that the L1-norm regularization is adaptively weighted by the L2 norm of the image gradient during the iterations. However, this approach is not conducive to recovering image details in the early stages of the optimization. Meanwhile, Xu et al. [10] proposed an unnatural L0-norm sparse representation to eliminate detrimental small-amplitude structures, providing a unified framework for both uniform and non-uniform motion deblurring. Liu et al. [11] observed that the surface maps of intermediate latent images containing detrimental structures typically have a large surface area, and they introduced an additional surface-aware prior based on the L0 norm to enforce sparsity on the image gradient, thereby preserving sharp edges and removing unfavorable microstructures from the intermediate latent images.
These methods still fail when dealing with images containing many saturated pixels and large blur kernels. Therefore, recent works have concentrated on image reconstruction with outliers for non-blind deblurring [12] and blind deblurring tasks [13,14,15]. Chen et al. [16] proposed to remove outliers by adopting a confidence map and further shrank them by multiplying by its inverse value [17]. Zhang et al. [18] proposed an intermediate-image correction method for saturated pixels that improves the quality of saturated image restoration by screening the intermediate image via Bayesian posterior estimation and excluding pixels that adversely affect blur kernel estimation. Although much progress has been made in blur estimation for natural images and in image reconstruction techniques, several major problems remain in current blind deblurring algorithms. First, most motion blur estimation methods assume a linear blurring process [19,20,21]. In practice, blurred images are often accompanied by strong noise and outliers such as saturated pixels, and linear blur models cannot describe saturated pixels effectively, which leads to poor performance on blurred images with outlier pixels. In particular, blurred images taken in low-light environments contain strong noise and many outliers. Therefore, effectively coping with the interference caused by saturated pixels has great practical value.
Recently, deep learning methods based on Bayesian theory have also been developed [22,23,24]. Kingma et al. [22] proposed the auto-encoding variational Bayes algorithm, in which the encoder maps the input to a distribution in the latent space and the decoder maps samples from the latent space back to the input space. Zhang et al. [20] and Ren et al. [23] constructed blind deblurring networks based on MAP estimation. However, these deep learning-based methods can easily fail when the data distribution differs from that of the training data. For this reason, the proposed method focuses on a conventional iterative blind deblurring scheme.
This work investigates a blind deblurring optimization model for saturated pixels established under the MAP framework. The intermediate image and the blur kernel are solved by alternating iterations, so that the kernel estimate eventually converges to the blur kernel of the observed image. To overcome the highly ill-posed nature of blind deblurring, regularizers on the image and on the blur kernel are usually used to constrain the model. Although the dark channel prior (DCP) has achieved excellent results, it is often unsatisfactory for images with large blur kernels or saturated pixels. Therefore, we adopt the pixel screening strategy of [18] to further correct the intermediate images in such cases. By distinguishing whether a pixel conforms to the linear degradation assumption, the proposed method reduces the influence of unfavorable structures and obtains a more accurate blur kernel.
We use MAP estimation to construct a probabilistic model linking the sharp image, the blur kernel, and the blurred image. Given the blurred image, the sharp image and the blur kernel are estimated by maximizing the posterior probability, under the assumption that the sharp image L and the blur kernel K are independent of each other. According to the conditional probability formula, we obtain
(L, K) = argmax_{L,K} P(L, K | B) = argmax_{L,K} [P(B | L, K) P(L) P(K)] / P(B).    (2.1)
Taking the negative logarithm of both sides of the above equation yields an equivalent form:
−log P(L, K | B) ∝ −log P(B | L, K) − log P(L) − log P(K).    (2.2)
Assume that n is additive white Gaussian noise with a mean of 0 and a variance of σ^2, so that B follows a normal distribution when L and K are given. The solution of L and K is then transformed into the following minimization problem:
(L, K) = argmin_{L,K} ‖L ⊗ K − B‖_2^2 + Φ(L) + Ψ(K).    (2.3)
The first term on the right-hand side is the data fitting term, and the second and third terms are regularization terms that encode prior knowledge, including statistical and distributional properties of the sharp image and the blur kernel. Blind deblurring first estimates the blur kernel and then recovers the sharp image from the blurred image.
Motion blur is usually caused by relative motion between the camera and the subject. This motion causes pixels to shift in a specific direction by a certain distance, resulting in image degradation. We assume that all values of the blur kernel are non-negative and sum to 1, that is,
K(z) ≥ 0,    ∑_{z∈Ω} K(z) = 1,
where Ω is the whole image space.
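In practice, these non-negativity and unit-sum constraints are typically re-imposed after every kernel update by a simple projection; the helper below is a sketch of that step (the function name is ours, not the paper's).

```python
import numpy as np

def project_kernel(K, eps=1e-12):
    """Project a kernel estimate onto {K(z) >= 0, sum_z K(z) = 1}."""
    K = np.maximum(K, 0.0)                      # clip negative entries
    s = K.sum()
    if s < eps:                                 # degenerate case: fall back to a uniform kernel
        return np.full_like(K, 1.0 / K.size)
    return K / s                                # renormalize to unit mass
```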
Since blur kernels are sparse, we constrain the possible blur kernels as follows:
Ψ(K) = ‖K‖_p,    (2.4)
where ‖·‖_p denotes the ℓ_p norm. Since the L2-norm constraint emphasizes the smoothness of the blur kernel, it leads to more stable kernel estimates. Therefore, we use the L2 norm to constrain the blur kernel in this paper.
The dark channel is a natural metric for distinguishing sharp images from blurry images [25]. He et al. [26] first proposed the dark channel for image haze removal. The dark channel of an image L is defined as the minimum value over an image patch:
D_{i,j}(L) = min_{(x,y)∈N(i,j)} ( min_{c∈{r,g,b}} L^c_{x,y} ),    (2.5)
where N(i,j) is the image patch centered at pixel (i,j). Experiments show that the dark channels of sharp images are sparser. A likely reason is that blurring computes a weighted sum of pixel values within a local neighborhood, thereby increasing the values of dark-channel pixels. Therefore, we use the L0 norm of the dark channel as the image regularization term.
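The dark channel of Eq (2.5) can be computed with a channel-wise minimum followed by a local minimum filter; the sketch below assumes SciPy's minimum_filter and a 35×35 patch, which is an illustrative choice rather than a value taken from the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(L, patch=35):
    """D_{i,j}(L): minimum over color channels, then over the patch N(i, j)."""
    per_pixel_min = L.min(axis=2) if L.ndim == 3 else L       # min over c in {r, g, b}
    return minimum_filter(per_pixel_min, size=patch, mode="nearest")
```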
The DCP-based deblurring model solves the following problem:
min_{L,K} ‖L ⊗ K − B‖_2^2 + λ‖D(L)‖_0 + μ‖∇L‖_0 + γ‖K‖_2^2.    (2.6)
The first term is a fidelity term that constrains the convolution of the recovered image with the blur kernel to be as close as possible to the observation. The ‖∇L‖_0 term preserves large image gradients, and ‖D(L)‖_0 measures the sparsity of the dark channel. Blind deconvolution methods commonly optimize L and K alternately during the iterations; the purpose of this alternating optimization is to progressively refine the motion blur kernel K and the latent image L.
In this work, the following two subproblems are solved by the alternating iteration method:
min_L ‖L ⊗ K − B‖_2^2 + λ‖D(L)‖_0 + μ‖∇L‖_0,    min_K ‖L ⊗ K − B‖_2^2 + γ‖K‖_2^2.    (2.7)
Specifically, in the k-th iteration, L can be solved efficiently using the fast Fourier transform. When L is given, the kernel estimation in Eq (2.7) is a least-squares problem. Gradient-based kernel estimation has shown its superiority [11], and the kernel estimation model is as follows:
K^{k+1} = argmin_K ‖∇L^{k+1} ⊗ K − ∇B‖_2^2 + γ‖K‖_2^2.    (2.8)
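Because Eq (2.8) is quadratic in K, it admits a closed-form solution in the Fourier domain. The following sketch assumes periodic boundaries and simple finite-difference gradients, then crops the result and projects it onto the kernel constraints; it is an illustrative implementation, not the authors' code.

```python
import numpy as np

def estimate_kernel(L, B, ksize, gamma=2.0):
    """Solve Eq (2.8): argmin_K sum_d ||d*L (x) K - d*B||_2^2 + gamma ||K||_2^2,
    where d runs over the x/y derivative filters, via a Fourier-domain division."""
    num = np.zeros(L.shape, dtype=complex)
    den = np.full(L.shape, gamma, dtype=complex)
    for axis in (0, 1):                                     # vertical and horizontal gradients
        FL = np.fft.fft2(np.gradient(L, axis=axis))
        FB = np.fft.fft2(np.gradient(B, axis=axis))
        num += np.conj(FL) * FB
        den += np.abs(FL) ** 2
    K = np.fft.fftshift(np.real(np.fft.ifft2(num / den)))   # center the recovered kernel
    cy, cx = K.shape[0] // 2, K.shape[1] // 2
    h = ksize // 2                                          # ksize is assumed odd
    K = K[cy - h:cy + h + 1, cx - h:cx + h + 1]
    K = np.maximum(K, 0.0)
    return K / max(K.sum(), 1e-12)                          # non-negative, unit sum
```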
Normally, blind image deblurring follows the basic linear blurring assumption of Eq (1.1). However, methods based on this assumption do not yield satisfactory results when recovering images with many saturated pixels. When outliers are present, intermediate latent images estimated with traditional data fidelity terms contain significant artifacts. Even a small number of outliers severely degrades the quality of the estimated blur kernel, because such outliers do not fit the linear model.
An effective way to identify and discard outliers during the iterations is to assign different weights to the pixels while updating the latent image and the blur kernel. Pixels categorized as outliers are assigned a weight of zero so that they do not affect subsequent iterations [18]. We introduce a binary variable Z that indicates whether pixel (i,j) complies with the linearity assumption [12], and define the intermediate correction operator as
P^{k+1}_{i,j} = P(Z^{k+1}_{i,j} = 1 | B_{i,j}, K^k, L^{k+1}).    (2.9)
According to the Bayes formula, we have
P(Z^{k+1}_{ij} = 1 | B_{ij}, K^k, L^{k+1}) = [P(B_{ij} | Z^{k+1}_{ij} = 1, K^k, L^{k+1}) P(Z^{k+1}_{ij} = 1 | K^k, L^{k+1})] / P(B_{ij} | K^k, L^{k+1}).    (2.10)
In this work, we assume that the noise n obeys a Gaussian distribution with mean 0 and variance σ^2. When Z^{k+1}_{ij} = 1, the degradation assumption holds, and we obtain

P(B_{ij} | Z^{k+1}_{ij} = 1, K^k, L^{k+1}) = φ_{ij},    (2.11)

where φ_{ij} ∼ N((L^{k+1} ⊗ K^k)_{ij}, σ^2).
When Z^{k+1}_{ij} = 0, pixel (i,j) is considered an outlier, and the conditional distribution of B_{ij} is approximated by a uniform distribution:

P(B_{ij} | Z^{k+1}_{ij} = 0, K^k, L^{k+1}) = 1/(b − a),    (2.12)
where b and a correspond to the maximum and minimum values of the input image, respectively.
Given the intermediate image L^{k+1} and kernel K^k, we use p_0 to denote the fraction of image pixels that deviate from the linear model. The probability that a pixel deviates from Eq (1.1) is then defined as
P(Z^{k+1}_{ij} = 0 | K^k, L^{k+1}) = p_0,    (2.13)
and we generally assume that about 0–10% of the pixels deviate. The probability that a pixel satisfies the linearity assumption of Eq (1.1), given the intermediate blur kernel and intermediate image, is
P(Z^{k+1}_{ij} = 1 | K^k, L^{k+1}) = 1 − p_0.    (2.14)
By the law of total probability, we obtain
P(B_{ij} | K^k, L^{k+1}) = ∑_{Z_{ij}∈{0,1}} P(B_{ij} | Z^{k+1}_{ij}, K^k, L^{k+1}) P(Z^{k+1}_{ij} | K^k, L^{k+1}) = φ_{ij}(1 − p_0) + p_0/(b − a).    (2.15)
Thus, with the above definitions, the pixel screening operator P is calculated as follows:
P^{k+1}_{i,j} = φ_{ij}(1 − p_0) / [φ_{ij}(1 − p_0) + p_0/(b − a)].    (2.16)
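A direct implementation of Eqs (2.11)–(2.16) is sketched below; the noise level σ and the outlier fraction p_0 are inputs, and the prediction L^{k+1} ⊗ K^k is computed with a 2-D convolution. This is a sketch under the stated assumptions, not the authors' code.

```python
import numpy as np
from scipy.signal import convolve2d

def screening_operator(B, L, K, sigma=0.05, p0=0.05):
    """Pixel-wise confidence P_{i,j} of Eq (2.16): probability that pixel (i, j)
    follows the linear model (1.1) given the current estimates L and K."""
    pred = convolve2d(L, K, mode="same", boundary="symm")       # (L (x) K)_{ij}
    phi = np.exp(-(B - pred) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    b, a = B.max(), B.min()                                     # range of the uniform outlier model
    return phi * (1.0 - p0) / (phi * (1.0 - p0) + p0 / max(b - a, 1e-12))
```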
During the iterations, after obtaining the estimated intermediate image, we alternately estimate the blur kernel. Based on the intermediate correction operator, we screen and correct the pixels of the intermediate image: pixels with a high probability of deviation, which have a more detrimental impact on blur kernel estimation, are down-weighted accordingly. With the corrected intermediate image, we estimate the blur kernel by solving
K^{k+1} = argmin_K ‖∇(L^{k+1} ∘ P) ⊗ K − ∇B‖_2^2 + γ‖K‖_2^2,    (2.17)
where ∘ denotes the element-wise (Hadamard) product.
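Putting the two steps together, one corrected kernel update per Eq (2.17) might look as follows, reusing the illustrative helpers sketched above (screening_operator and estimate_kernel are our hypothetical names, and B, L_new, K_old are assumed to hold the current observation and estimates).

```python
# One corrected kernel update: screen the intermediate image, then re-estimate K.
P = screening_operator(B, L_new, K_old, sigma=0.05, p0=0.05)            # Eq (2.16)
K_new = estimate_kernel(L_new * P, B, ksize=K_old.shape[0], gamma=2.0)  # Eq (2.17); * plays the role of the Hadamard product
```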
As shown in Figure 1, this work is carried out within a multi-scale deblurring framework, in which kernel estimation proceeds in a coarse-to-fine manner over an image pyramid. Given a color input image, we first convert it to grayscale. We build an image pyramid and resize the blur kernel with a down-sampling operation, obtaining a set of multi-resolution images. Starting from the coarsest level, the overall structure of the image is restored and a rough blur kernel is recovered using the correction operator. As the image and kernel resolutions increase, finer details are gradually restored.
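The coarse-to-fine loop of Figure 1 can be organized as in the sketch below; the per-level alternating solver (the L-step of Eq (2.7), the screening of Eq (2.16), and the K-step of Eq (2.17)) is passed in as a callback because its details are given in the preceding sections. The pyramid depth, scale factor, and resizing utility are illustrative choices, not values from the paper.

```python
import numpy as np
from skimage.transform import resize

def coarse_to_fine(B, ksize, solve_level, levels=5, scale=0.75):
    """Run the alternating solver from the coarsest pyramid level to full resolution.
    `solve_level(Bs, K0)` performs the per-level L/K updates and returns (L, K)."""
    L, K = None, None
    for i in reversed(range(levels)):                            # coarsest level first
        sz = tuple(max(16, int(round(s * scale ** i))) for s in B.shape)
        Bs = resize(B, sz, anti_aliasing=True)                   # down-sampled observation
        ks = max(3, int(round(ksize * scale ** i)) | 1)          # odd kernel size at this level
        K0 = np.full((ks, ks), 1.0 / ks ** 2) if K is None else resize(K, (ks, ks))
        K0 = np.maximum(K0, 0.0); K0 /= max(K0.sum(), 1e-12)     # re-project after resizing
        L, K = solve_level(Bs, K0)                               # alternate Eqs (2.7), (2.16), (2.17)
    return L, K
```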
To verify the effectiveness of the proposed method, we conduct numerical experiments on both synthetic and real-world image datasets, comparing the dark-channel blind deblurring method before and after adding the correction. We set the parameters

λ = 0.003, μ = 0.003, and γ = 2,

and p_0 is an adjustable parameter, typically in the range of 0.02 to 0.1. Figure 2 compares the results on the Levin dataset [5] obtained by varying p_0 from 0.02 to 0.16. The results show that the deblurring performance depends on the choice of p_0: the more outliers present, the larger the value of p_0 that yields better results.
The experimental hardware comprises an Intel Core i5-10300 CPU, an NVIDIA GeForce GTX 1650 GPU, and 16.0 GB of RAM; the operating system is Windows 10 (64-bit). We use the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) as evaluation metrics.
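For reference, both metrics can be computed with scikit-image's standard implementations; the snippet below assumes grayscale images scaled to [0, 1].

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(restored, ground_truth, data_range=1.0):
    """Return (PSNR, SSIM) of a restored image against its ground truth."""
    psnr = peak_signal_noise_ratio(ground_truth, restored, data_range=data_range)
    ssim = structural_similarity(ground_truth, restored, data_range=data_range)
    return psnr, ssim
```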
We use the Levin dataset [5] and the Köhler dataset [27] to evaluate our method. The Levin dataset is a standard benchmark consisting of 32 blurred images synthesized from 4 original images and 8 different blur kernels, with each image of size 255×255. The Köhler dataset is a standard benchmark consisting of 48 blurred images synthesized from 4 original images and 12 different blur kernels, with each image of size 800×800. We compare our method with DCP [25], PMP [28], LMG [21], and Sat [17] to demonstrate its effectiveness.
In Figure 3, the left panel shows the PSNR comparison between the proposed method and state-of-the-art methods; our method clearly improves the PSNR. The right panel shows the error-ratio comparison with and without the intermediate correction, and the proposed method attains the smallest error ratio. As shown in Figure 3 and Table 1, the experimental results on the Levin dataset demonstrate that the proposed deblurring algorithm achieves significant performance improvements across a wide range of blur types and degrees. The improved method obtains higher PSNR and SSIM values, and its ability to reach a 100% success rate faster confirms its effectiveness in removing blur of different types and degrees.
Figure 4 shows that our method recovers the image and the kernel with fewer artifacts and higher quality.
As shown in Figure 5 and Table 2, the experimental results on the Köhler dataset show that the proposed deblurring method achieves a significant performance improvement; the recovered results obtain higher PSNR and SSIM values, demonstrating its effectiveness in improving image quality. Figure 6 shows that the deblurred image produced by the proposed method has the best restoration performance with the fewest ringing artifacts; the restored kernel is cleaner and the image has the best visual quality.
As shown in Figure 7, we compare the dark channels of the intermediate results with and without the intermediate correction. Without the correction strategy, our method reduces to the DCP-based method [25]. The intermediate results show that our method restores sharper edges and clearer blur kernels, and the final recovered image contains more details, demonstrating that our method improves the deblurring quality for saturated images.
Estimating motion kernels from blurred images with saturated regions is challenging in image processing. As shown in Figure 8, we present three blurry images with saturated pixels to demonstrate the performance of our method. The first column shows the blurry images, and the second and third columns show the results of DCP [25] and of our method, respectively. The results show that the intermediate correction not only improves the quality of the recovered images but also restores clearer blur trajectories.
In this work, we introduce a blind deblurring method based on the DCP with an intermediate-image correction strategy. To remove the adverse effect of outliers such as saturated pixels, we correct the intermediate image during the deblurring process. By assigning different weights to intermediate-image pixels, we improve kernel estimation and thus enhance the final restoration quality. Experimental results show that our method significantly improves the accuracy and robustness of blur estimation for blurred images containing noise and outlier pixels.
Min Xiao: writing—original draft; Jinkang Zhang: writing—original draft; Zijin Zhu: writing—review and editing; Meina Zhang: methodology, supervision. All authors have read and agreed to the published version of the manuscript.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work is supported by the Science Foundation of China University of Petroleum, Beijing (No. 2462023YJRC008), Foundation of National Key Laboratory of Computational Physics (No. 6142A05QN23005), Postdoctoral Fellowship Program of CPSF (Nos. GZC20231997 and 2024M752451), National Natural Science Foundation of China (No. 62372467).
The authors have no conflicts to disclose.