The model of decision practice reflects the evolution of moral judgment in mathematical psychology, which is concerned with determining the significance of different options and choosing one of them to utilize. Most studies of animal behavior, especially in a two-choice situation, divide such circumstances into two events, mainly on the basis of the movement of the animal towards a specific choice. However, such situations can generally be divided into four events, depending on the chosen side and the placement of the food. This article aims to fill this gap by proposing a generic stochastic functional equation that can describe several experiments in psychology and learning theory. The existence, uniqueness, and stability of solutions of the suggested stochastic equation are examined by utilizing notable fixed point theory tools. Finally, we offer two examples to substantiate our key findings.
Citation: Ali Turab, Wajahat Ali, Choonkil Park. A unified fixed point approach to study the existence and uniqueness of solutions to the generalized stochastic functional equation emerging in the psychological theory of learning[J]. AIMS Mathematics, 2022, 7(4): 5291-5304. doi: 10.3934/math.2022294
Mathematical psychology is a subfield of psychology that focuses on the mathematical modeling of visual, intellectual, behavioral, and physical processes and on the formulation of law-like principles that link measurable functional attributes to quantitative behavior. Mathematical techniques are utilized to generate more trustworthy theories, which result in more scientifically rigorous validations. The primary difficulty in current, and most likely future, applications of mathematics to psychological issues lies in modeling these problems.
The learning process in human beings or animals may be viewed as a chain of responses among many potential choices. Even in repeated tests conducted under well-controlled conditions, preference sequences are often unpredictable, suggesting that chance plays a role in response selection. Thus, it is useful to consider the systematic changes in a series of choices that correspond to variations in response probability from trial to trial. From this perspective, learning research is largely devoted to understanding the trial-to-trial probabilities that define a stochastic process.
Recent studies in mathematical psychology have shown that the behavior observed in a basic learning experiment follows a stochastic model. The idea itself is not new (for its history, see [1]). After 1950, two key features emerged, mainly from the work of Bush, Estes, and Mosteller. First, the proposed models treat the learning process in an inclusive manner. Second, such models can be analyzed in a way that reveals their statistical properties.
In psychological learning theory, the solutions of the following stochastic equation are of great importance:
$L(x)=xL(\nu_1+(1-\nu_1)x)+(1-x)L((1-\nu_2)x), \qquad (1.1)$
for all $x\in V=[0,1]$, where $0<\nu_1\le\nu_2<1$ are learning-rate parameters and $L:V\to\mathbb{R}$ is an unknown function. Such behavior was described by Markov transitions, with the states transformed by $P(x\to \nu_1+(1-\nu_1)x)$ and $P(1-x\to(1-\nu_2)x)$, where $P$ is the probability of the corresponding event.
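To make the trial-to-trial dynamics behind (1.1) concrete, the following Python sketch simulates the two Markov transitions just described; the parameter values, seed, and number of trials are illustrative and not taken from the paper.

```python
import random

def simulate_two_operator(nu1, nu2, x0, trials, seed=0):
    """Simulate the trial-to-trial Markov transitions behind Eq. (1.1):
    with probability x the state moves to nu1 + (1 - nu1) * x,
    and with probability 1 - x it moves to (1 - nu2) * x."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(trials):
        if rng.random() < x:          # event occurring with probability x
            x = nu1 + (1 - nu1) * x
        else:                         # complementary event, probability 1 - x
            x = (1 - nu2) * x
        path.append(x)
    return path

# Illustrative parameter values (not taken from the paper).
print(simulate_two_operator(nu1=0.2, nu2=0.3, x0=0.5, trials=20)[-1])
```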
In 1976, Istrǎţescu [2] used the above stochastic equation (1.1) to study the behavior of predatory animals that prey on two distinct types of prey.
On the other hand, Bush and Wilson [3] observed the movement of a paradise fish in a two-choice situation under reinforcement-extinction and habit-formation behaviors. They claimed that under such behavior there are four distinct outcomes: left-reward, left-nonreward, right-reward, and right-nonreward.
It is usually assumed that being rewarded for choosing one side increases the probability of selecting that side in subsequent trials. However, the effect of unrewarded trials is less obvious. According to reinforcement-extinction theory (see Table 1), the probability of selecting an unrewarded side in the subsequent trial decreases.
Table 1. The reinforcement-extinction model.
Fish's responses | Outcomes (left side) | Outcomes (right side) | Events
Reinforcement | $\eta_1 x$ | $\eta_1 x+1-\eta_1$ | $E_{RE_1}$
Non-reinforcement | $\eta_2 x+1-\eta_2$ | $\eta_2 x$ | $E_{RE_2}$
By contrast, a model that depends on habit formation or secondary reinforcement (see Table 2) would indicate that merely picking a side increases the chances of choosing that side in subsequent trials.
Table 2. The habit-formation model.
Fish's responses | Outcomes (left side) | Outcomes (right side) | Events
Reinforcement | $\eta_1 x$ | $\eta_1 x+1-\eta_1$ | $E_{HF_1}$
Non-reinforcement | $\eta_2 x$ | $\eta_2 x+1-\eta_2$ | $E_{HF_2}$
In 1967, Epstein [4] proposed the following functional equation to discuss the learning process of animals in a two-choice situation
$L(x)=\left(\dfrac{e^x}{1-e^x}\right)L(k_1x)+\left(1-\dfrac{e^x}{1-e^x}\right)L(k_2x), \quad \forall x\in V, \qquad (1.2)$
where $L:V\to\mathbb{R}$ is an unknown function and $k_1,k_2:V\to\mathbb{R}$ are given mappings. The analytical solution of the above equation was obtained by means of the bilateral Laplace transform.
Recently, Turab and Sintunavarat [5] utilized the above idea and suggested the functional equation stated below
$L(x)=xL(\varpi_1x+(1-\varpi_1)\Theta_1)+(1-x)L(\varpi_2x+(1-\varpi_2)\Theta_2), \quad \forall x\in V, \qquad (1.3)$
where $L:V\to\mathbb{R}$ is an unknown function, $0<\varpi_1\le\varpi_2<1$ and $\Theta_1,\Theta_2\in V$. The aforementioned functional equation was used to study a specific kind of psychological resistance of dogs enclosed in a small box.
Several other studies on human actions in probability-learning scenarios have produced different results (see [6,7,8,9,10,11]).
It is worth noting that most studies in the literature on the behavior of animals in a two-choice situation focus only on the movement of the animals towards a specific choice. In contrast, by considering both the food placement and the chosen side, Bush and Wilson [3] divided such responses into four events (right-reward, right-nonreward, left-reward, left-nonreward). These events and their corresponding probabilities are listed in Table 3 below, followed by a small sampling sketch.
Table 3. The four events and their corresponding probabilities.
Events | Responses and outcomes | Corresponding probabilities
$E_1$ | right-reward (food side) | $\tau\nu$
$E_2$ | right-nonreward (non-food side) | $(1-\tau)\nu$
$E_3$ | left-reward (food side) | $\tau(1-\nu)$
$E_4$ | left-nonreward (non-food side) | $(1-\tau)(1-\nu)$
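For illustration only, the four events of Table 3 can be sampled directly from their stated probabilities. In the sketch below, $\nu$ is read as the probability of a right-side response and $\tau$ as the probability that the chosen side is the food (rewarded) side; this reading, like the numerical values, is our own assumption.

```python
import random

def sample_event(tau, nu, rng=random):
    """Sample one of the four events of Table 3 from its probability."""
    probabilities = [
        ("E1: right-reward (food side)",        tau * nu),
        ("E2: right-nonreward (non-food side)", (1 - tau) * nu),
        ("E3: left-reward (food side)",         tau * (1 - nu)),
        ("E4: left-nonreward (non-food side)",  (1 - tau) * (1 - nu)),
    ]
    r, acc = rng.random(), 0.0
    for name, p in probabilities:
        acc += p
        if r < acc:
            return name
    return probabilities[-1][0]   # guard against floating-point round-off

print(sample_event(tau=0.7, nu=0.4))
```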
To fill the gap discussed above, we propose the following general stochastic functional equation:
$L(x)=\tau\nu(x)L(k_1(x))+(1-\tau)\nu(x)L(k_2(x))+\tau(1-\nu(x))L(k_3(x))+(1-\tau)(1-\nu(x))L(k_4(x)), \qquad (1.4)$
where $L:V\to\mathbb{R}$ is an unknown function and $0\le\tau\le 1$ represents the probability of choosing the food side. Also, $\nu:V\to V$ and $k_1,k_2,k_3,k_4:V\to V$ are given mappings that represent the four options, based on the chosen side and the reward, discussed in Table 3.
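As a schematic illustration (not code from the paper), the right-hand side of (1.4) can be encoded as an operator acting on functions, with the mappings $\nu, k_1,\dots,k_4$ and the value of $\tau$ supplied as arguments; the placeholder mappings in the usage line are hypothetical.

```python
def make_K(tau, nu, k1, k2, k3, k4):
    """Return the operator K whose fixed points are the solutions of (1.4):
    (KL)(x) is the probability-weighted mixture of L evaluated at the four
    transformed states k1(x), ..., k4(x)."""
    def K(L):
        def KL(x):
            return (tau * nu(x) * L(k1(x))
                    + (1 - tau) * nu(x) * L(k2(x))
                    + tau * (1 - nu(x)) * L(k3(x))
                    + (1 - tau) * (1 - nu(x)) * L(k4(x)))
        return KL
    return K

# Example usage with simple placeholder mappings (illustrative only).
K = make_K(tau=0.5, nu=lambda x: x,
           k1=lambda x: x / 4, k2=lambda x: x / 3,
           k3=lambda x: x / 5, k4=lambda x: x / 6)
print(K(lambda x: x)(0.8))   # one application of K to the function L(x) = x
```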
Our objective is to prove the existence and uniqueness of solutions to Eq (1.4) by utilizing the Banach fixed point theorem (for details of fixed point theory, we refer to [12,13,14,15,16]). Following that, we provide two examples to demonstrate the importance of our findings in this area. Finally, we examine the Hyers-Ulam and Hyers-Ulam-Rassias (shortly, HU and HUR) stability of the solution of the suggested stochastic equation.
The following definition and theorem will be required in what follows.
Definition 1.1 ([17]). Let $(V,d)$ be a metric space. A mapping $L:V\to V$ is called a
(1) Banach contraction mapping (or, BCM) if there is a nonnegative real number $\lambda<1$ such that
$d(L\mu,L\upsilon)\le\lambda d(\mu,\upsilon) \qquad (1.5)$
for all $\mu,\upsilon\in V$.
(2) Contractive mapping if
$d(L\mu,L\upsilon)<d(\mu,\upsilon) \qquad (1.6)$
for all $\mu,\upsilon\in V$ with $\mu\neq\upsilon$.
(3) Non-expansive mapping if
$d(L\mu,L\upsilon)\le d(\mu,\upsilon) \qquad (1.7)$
for all $\mu,\upsilon\in V$.
Theorem 1.2 ([18]). Let $(V,d)$ be a complete metric space and $L:V\to V$ be a BCM with constant $\lambda<1$. Then $L$ has precisely one fixed point. Furthermore, the Picard iteration $\{\mu_n\}$ in $V$, defined by $\mu_n=L\mu_{n-1}$ for all $n\in\mathbb{N}$ with $\mu_0\in V$, converges to the unique fixed point of $L$.
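A minimal numerical sketch of the Picard iteration of Theorem 1.2 for a real-valued contraction; the particular mapping, tolerance, and iteration cap are illustrative.

```python
def picard_iteration(L_operator, mu0, tol=1e-12, max_iter=1000):
    """Picard iteration mu_n = L(mu_{n-1}); under the hypotheses of
    Theorem 1.2 the sequence converges to the unique fixed point."""
    mu = mu0
    for _ in range(max_iter):
        nxt = L_operator(mu)
        if abs(nxt - mu) < tol:
            return nxt
        mu = nxt
    return mu

# A Banach contraction on the reals with constant 1/2 (illustrative only);
# its unique fixed point is 0.5.
print(picard_iteration(lambda t: 0.5 * t + 0.25, mu0=0.0))
```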
Let $V=[0,1]$. The class of all continuous real-valued functions $L:V\to\mathbb{R}$ such that $L(0)=0$ and
$\sup_{v_1\neq v_2}\dfrac{|L(v_1)-L(v_2)|}{|v_1-v_2|}<\infty$
is denoted by $D$. It is straightforward to check that $(D,\|\cdot\|)$ is a Banach space (for details, see [19]), where
$\|L\|=\sup_{v_1\neq v_2}\dfrac{|L(v_1)-L(v_2)|}{|v_1-v_2|} \qquad (2.1)$
for all $L\in D$.
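For intuition, the norm (2.1) can be approximated from below by restricting the supremum to a finite grid; a small sketch (the grid size and test function are arbitrary choices):

```python
def lipschitz_norm(L, n=200):
    """Grid approximation (from below) of the norm in (2.1):
    the supremum of |L(v1) - L(v2)| / |v1 - v2| over v1 != v2 in [0, 1]."""
    grid = [i / n for i in range(n + 1)]
    best = 0.0
    for i, v1 in enumerate(grid):
        for v2 in grid[i + 1:]:
            best = max(best, abs(L(v1) - L(v2)) / (v2 - v1))
    return best

# For L(x) = x**2 (which satisfies L(0) = 0) the exact value is 2.
print(lipschitz_norm(lambda x: x * x))
```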
Next, we rewrite (1.4) as
$L(x)=\tau\nu(x)L(k_1(x))+(1-\tau)\nu(x)L(k_2(x))+\tau(1-\nu(x))L(k_3(x))+(1-\tau)(1-\nu(x))L(k_4(x)), \qquad (2.2)$
where $L:V\to\mathbb{R}$ is an unknown function and $k_1,k_2,k_3,k_4:V\to V$ are given contraction mappings with contractive coefficients $\eta_1,\eta_2,\eta_3,\eta_4$, respectively, and $k_3(0)=0=k_4(0)$. Also, $\nu:V\to V$ is a given non-expansive mapping with $\nu(0)=0$ and $|\nu(x)|\le\eta_5$ ($\eta_5\ge 0$) for all $x\in V$.
Theorem 2.1. Consider the generalized stochastic equation (2.2). Assume that $\lambda_1<1$, where $\lambda_1$ is defined as
$\lambda_1:=\tau(\eta_1(1+\eta_5)+2\eta_3)+(1-\tau)(\eta_2(1+\eta_5)+2\eta_4), \qquad (2.3)$
and $k_1(0)=0=k_2(0)$. Assume that there is a nonempty subset $O$ of $T:=\{L\in D \mid L(1)\le 1\}$ such that $(O,\|\cdot\|)$ is a Banach space, where $\|\cdot\|$ is given in (2.1). Then (2.2) has a unique solution. Furthermore, the sequence $\{L_n\}$ in $O$, defined for all $n\in\mathbb{N}$ from an arbitrary $L_0\in O$ by
$L_n(x)=\tau\nu(x)L_{n-1}(k_1(x))+(1-\tau)\nu(x)L_{n-1}(k_2(x))+\tau(1-\nu(x))L_{n-1}(k_3(x))+(1-\tau)(1-\nu(x))L_{n-1}(k_4(x)), \qquad (2.4)$
converges to the unique solution of (2.2).
Proof. Let $d:O\times O\to\mathbb{R}$ be the metric induced by $\|\cdot\|$. Then $(O,d)$ is a complete metric space. We consider the operator $K$ on $O$ defined by
$(KL)(x)=\tau\nu(x)L(k_1(x))+(1-\tau)\nu(x)L(k_2(x))+\tau(1-\nu(x))L(k_3(x))+(1-\tau)(1-\nu(x))L(k_4(x)),$
for all L∈O.
For each $L$, we obtain $(KL)(0)=0$. Also, $K$ is continuous and $\|KL\|<\infty$ for all $L\in O$. Thus, $K$ is a self-operator on $O$. Moreover, the solutions of (2.2) are exactly the fixed points of $K$. Since $K$ is a linear mapping, for $L_1,L_2\in O$ we obtain
‖KL1−KL2‖=‖K(L1−L2)‖. |
Thus, to estimate $\|KL_1-KL_2\|$, we consider the quotient
$\Upsilon_{x_1,x_2}:=\dfrac{K(L_1-L_2)(x_1)-K(L_1-L_2)(x_2)}{x_1-x_2}, \qquad x_1,x_2\in V,\ x_1\neq x_2.$
For each distinct x1,x2∈V, we get
$\begin{aligned}\Upsilon_{x_1,x_2}={}&\frac{1}{x_1-x_2}\Big[\tau\nu(x_1)(L_1-L_2)(k_1(x_1))+(1-\tau)\nu(x_1)(L_1-L_2)(k_2(x_1))\\&+\tau(1-\nu(x_1))(L_1-L_2)(k_3(x_1))+(1-\tau)(1-\nu(x_1))(L_1-L_2)(k_4(x_1))\\&-\tau\nu(x_2)(L_1-L_2)(k_1(x_2))-(1-\tau)\nu(x_2)(L_1-L_2)(k_2(x_2))\\&-\tau(1-\nu(x_2))(L_1-L_2)(k_3(x_2))-(1-\tau)(1-\nu(x_2))(L_1-L_2)(k_4(x_2))\Big]\\={}&\frac{1}{x_1-x_2}\Big[\tau\nu(x_1)(L_1-L_2)(k_1(x_1))-\tau\nu(x_1)(L_1-L_2)(k_1(x_2))\\&+(1-\tau)\nu(x_1)(L_1-L_2)(k_2(x_1))-(1-\tau)\nu(x_1)(L_1-L_2)(k_2(x_2))\\&+\tau(1-\nu(x_1))(L_1-L_2)(k_3(x_1))-\tau(1-\nu(x_1))(L_1-L_2)(k_3(x_2))\\&+(1-\tau)(1-\nu(x_1))(L_1-L_2)(k_4(x_1))-(1-\tau)(1-\nu(x_1))(L_1-L_2)(k_4(x_2))\\&+\tau\nu(x_1)(L_1-L_2)(k_1(x_2))-\tau\nu(x_2)(L_1-L_2)(k_1(x_2))\\&+(1-\tau)\nu(x_1)(L_1-L_2)(k_2(x_2))-(1-\tau)\nu(x_2)(L_1-L_2)(k_2(x_2))\\&+\tau(1-\nu(x_1))(L_1-L_2)(k_3(x_2))-\tau(1-\nu(x_2))(L_1-L_2)(k_3(x_2))\\&+(1-\tau)(1-\nu(x_1))(L_1-L_2)(k_4(x_2))-(1-\tau)(1-\nu(x_2))(L_1-L_2)(k_4(x_2))\Big].\end{aligned}$
Taking absolute values and applying the triangle inequality to each of the differences above, we can bound $|\Upsilon_{x_1,x_2}|$ term by term.
Since $k_1,\dots,k_4:V\to V$ are BCMs with contractive coefficients $\eta_1,\dots,\eta_4$, respectively, with $k_3(0)=0=k_4(0)$, and $\nu:V\to V$ is a non-expansive mapping with $\nu(0)=0$ and $|\nu(x)|\le\eta_5$ ($\eta_5\ge 0$) for all $x\in V$, using (2.1) we obtain
$\begin{aligned}|\Upsilon_{x_1,x_2}|\le{}&\eta_1\tau|\nu(x_1)|\,\|L_1-L_2\|+\eta_2(1-\tau)|\nu(x_1)|\,\|L_1-L_2\|+\eta_3\tau|1-\nu(x_1)|\,\|L_1-L_2\|+\eta_4(1-\tau)|1-\nu(x_1)|\,\|L_1-L_2\|\\&+|\tau(L_1-L_2)(k_1(x_2))-\tau(L_1-L_2)(k_1(0))|+|(1-\tau)(L_1-L_2)(k_2(x_2))-(1-\tau)(L_1-L_2)(k_2(0))|\\&+|\tau(L_1-L_2)(k_3(x_2))-\tau(L_1-L_2)(k_3(0))|+|(1-\tau)(L_1-L_2)(k_4(x_2))-(1-\tau)(L_1-L_2)(k_4(0))|\\ \le{}&\eta_1\eta_5\tau\|L_1-L_2\|+\eta_2\eta_5(1-\tau)\|L_1-L_2\|+\eta_3\tau\|L_1-L_2\|+\eta_4(1-\tau)\|L_1-L_2\|\\&+\eta_1\tau x_2\|L_1-L_2\|+\eta_2(1-\tau)x_2\|L_1-L_2\|+\eta_3\tau x_2\|L_1-L_2\|+\eta_4(1-\tau)x_2\|L_1-L_2\|\\ \le{}&\lambda_1\|L_1-L_2\|\qquad(\text{since } x_2\le 1),\end{aligned}$
where λ1 is given in (2.3). This gives that
d(KL1,KL2)=‖KL1−KL2‖≤λ1‖L1−L2‖=λ1d(L1,L2). |
Since $\lambda_1<1$, by Theorem 1.2 the operator $K$ has a unique fixed point; that is, (2.2) has a unique solution, and the iteration (2.4) converges to it.
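As an informal numerical sanity check of the contraction estimate just derived (with parameter choices of our own, not taken from the paper), one can compare a grid approximation of $\|KL_1-KL_2\|$ with $\lambda_1\|L_1-L_2\|$:

```python
# Illustrative parameters satisfying the hypotheses of Theorem 2.1
# (these choices are ours, not taken from the paper).
tau = 0.4
nu = lambda x: x                                  # non-expansive, nu(0) = 0, eta5 = 1
k = [lambda x: x / 8, lambda x: x / 6, lambda x: x / 10, lambda x: x / 7]
eta = [1 / 8, 1 / 6, 1 / 10, 1 / 7]               # contractive coefficients of k1, ..., k4
eta5 = 1.0

lam1 = (tau * (eta[0] * (1 + eta5) + 2 * eta[2])
        + (1 - tau) * (eta[1] * (1 + eta5) + 2 * eta[3]))

def K(L):
    """The operator K from the proof, for the parameters above."""
    return lambda x: (tau * nu(x) * L(k[0](x)) + (1 - tau) * nu(x) * L(k[1](x))
                      + tau * (1 - nu(x)) * L(k[2](x))
                      + (1 - tau) * (1 - nu(x)) * L(k[3](x)))

def norm_diff(F, G, n=200):
    """Grid approximation of ||F - G|| in the norm (2.1)."""
    g = [i / n for i in range(n + 1)]
    return max(abs((F(a) - G(a)) - (F(b) - G(b))) / (b - a)
               for i, a in enumerate(g) for b in g[i + 1:])

L1, L2 = (lambda x: x), (lambda x: x * x)         # two members of D (both vanish at 0)
print(lam1 < 1, norm_diff(K(L1), K(L2)) <= lam1 * norm_diff(L1, L2))
```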
Here, Theorem 2.1 leads to the conclusion stated below.
Corollary 2.2. Consider the generalized stochastic equation (2.2). Assume that $k_1,k_2,k_3,k_4:V\to V$ are contraction mappings with contractive coefficients $\eta_1,\eta_2,\eta_3,\eta_4$, where $\eta_1\le\eta_2\le\eta_3\le\eta_4$, and that $k_1(0)=0=k_2(0)$. Also, suppose $\tilde{\lambda}_1:=\eta_4(3+\eta_5)<1$, and assume that there is a nonempty subset $O$ of $T:=\{L\in D \mid L(1)\le 1\}$ such that $(O,\|\cdot\|)$ is a Banach space, where $\|\cdot\|$ is given in (2.1). Then (2.2) has a unique solution. Furthermore, the sequence $\{L_n\}$ in $O$, defined for all $n\in\mathbb{N}$ from an arbitrary $L_0\in O$ by
$L_n(x)=\tau\nu(x)L_{n-1}(k_1(x))+(1-\tau)\nu(x)L_{n-1}(k_2(x))+\tau(1-\nu(x))L_{n-1}(k_3(x))+(1-\tau)(1-\nu(x))L_{n-1}(k_4(x)), \qquad (2.5)$
converges to the unique solution of (2.2).
The conditions $k_1(0)=0=k_2(0)$ are sufficient, but not necessary, for the above results. The results that follow do not require them.
Theorem 2.3. Consider the generalized stochastic equation (2.2). Suppose that there exist $\eta_6,\eta_7\ge 0$ such that
$|k_1(x)|\le\eta_6 \quad\text{and}\quad |k_2(x)|\le\eta_7, \quad\text{for all } x\in V, \qquad (2.6)$
and that $\lambda_2<1$, where $\lambda_2$ is defined as
$\lambda_2:=\tau(\eta_1\eta_5+2\eta_3+\eta_6)+(1-\tau)(\eta_2\eta_5+2\eta_4+\eta_7). \qquad (2.7)$
Assume that there is a nonempty subset $O$ of $T:=\{L\in D \mid L(1)\le 1\}$ such that $(O,\|\cdot\|)$ is a Banach space, where $\|\cdot\|$ is given in (2.1). Then (2.2) has a unique solution. Furthermore, the sequence $\{L_n\}$ in $O$, defined for all $n\in\mathbb{N}$ from an arbitrary $L_0\in O$ by
$L_n(x)=\tau\nu(x)L_{n-1}(k_1(x))+(1-\tau)\nu(x)L_{n-1}(k_2(x))+\tau(1-\nu(x))L_{n-1}(k_3(x))+(1-\tau)(1-\nu(x))L_{n-1}(k_4(x)), \qquad (2.8)$
converges to the unique solution of (2.2).
Proof. Let $d:O\times O\to\mathbb{R}$ be the metric induced by $\|\cdot\|$. Then $(O,d)$ is a complete metric space. We consider the operator $K$ on $O$ defined by
$(KL)(x)=\tau\nu(x)L(k_1(x))+(1-\tau)\nu(x)L(k_2(x))+\tau(1-\nu(x))L(k_3(x))+(1-\tau)(1-\nu(x))L(k_4(x)),$
for all L∈O.
For each $L$, we obtain $(KL)(0)=0$. Also, $K$ is continuous and $\|KL\|<\infty$ for all $L\in O$. Thus, $K$ is a self-operator on $O$. Moreover, the solutions of (2.2) are exactly the fixed points of $K$. Since $K$ is a linear mapping, for $L_1,L_2\in O$ we get
‖KL1−KL2‖=‖K(L1−L2)‖. |
Thus, to estimate $\|KL_1-KL_2\|$, we consider the quotient
$\Upsilon_{x_1,x_2}:=\dfrac{K(L_1-L_2)(x_1)-K(L_1-L_2)(x_2)}{x_1-x_2}, \qquad x_1,x_2\in V,\ x_1\neq x_2.$
For each distinct x1,x2∈V, we obtain
$\begin{aligned}\Upsilon_{x_1,x_2}={}&\frac{1}{x_1-x_2}\Big[\tau\nu(x_1)(L_1-L_2)(k_1(x_1))+(1-\tau)\nu(x_1)(L_1-L_2)(k_2(x_1))\\&+\tau(1-\nu(x_1))(L_1-L_2)(k_3(x_1))+(1-\tau)(1-\nu(x_1))(L_1-L_2)(k_4(x_1))\\&-\tau\nu(x_2)(L_1-L_2)(k_1(x_2))-(1-\tau)\nu(x_2)(L_1-L_2)(k_2(x_2))\\&-\tau(1-\nu(x_2))(L_1-L_2)(k_3(x_2))-(1-\tau)(1-\nu(x_2))(L_1-L_2)(k_4(x_2))\Big]\\={}&\frac{1}{x_1-x_2}\Big[\tau\nu(x_1)(L_1-L_2)(k_1(x_1))-\tau\nu(x_1)(L_1-L_2)(k_1(x_2))\\&+(1-\tau)\nu(x_1)(L_1-L_2)(k_2(x_1))-(1-\tau)\nu(x_1)(L_1-L_2)(k_2(x_2))\\&+\tau(1-\nu(x_1))(L_1-L_2)(k_3(x_1))-\tau(1-\nu(x_1))(L_1-L_2)(k_3(x_2))\\&+(1-\tau)(1-\nu(x_1))(L_1-L_2)(k_4(x_1))-(1-\tau)(1-\nu(x_1))(L_1-L_2)(k_4(x_2))\\&+\tau\nu(x_1)(L_1-L_2)(k_1(x_2))-\tau\nu(x_2)(L_1-L_2)(k_1(x_2))\\&+(1-\tau)\nu(x_1)(L_1-L_2)(k_2(x_2))-(1-\tau)\nu(x_2)(L_1-L_2)(k_2(x_2))\\&+\tau(1-\nu(x_1))(L_1-L_2)(k_3(x_2))-\tau(1-\nu(x_2))(L_1-L_2)(k_3(x_2))\\&+(1-\tau)(1-\nu(x_1))(L_1-L_2)(k_4(x_2))-(1-\tau)(1-\nu(x_2))(L_1-L_2)(k_4(x_2))\Big].\end{aligned}$
Taking absolute values and applying the triangle inequality to each of the differences above, we can bound $|\Upsilon_{x_1,x_2}|$ term by term.
Here $k_1,\dots,k_4:V\to V$ are BCMs with contractive coefficients $\eta_1,\dots,\eta_4$, respectively, satisfying condition (2.6). Also, $\nu:V\to V$ is a non-expansive mapping with $\nu(0)=0$ and $|\nu(x)|\le\eta_5$ ($\eta_5\ge 0$) for all $x\in V$. Thus, using (2.1) we obtain
$\begin{aligned}|\Upsilon_{x_1,x_2}|\le{}&\eta_1\tau|\nu(x_1)|\,\|L_1-L_2\|+\eta_2(1-\tau)|\nu(x_1)|\,\|L_1-L_2\|+\eta_3\tau|1-\nu(x_1)|\,\|L_1-L_2\|+\eta_4(1-\tau)|1-\nu(x_1)|\,\|L_1-L_2\|\\&+|\tau(L_1-L_2)(k_1(x_2))-\tau(L_1-L_2)(0)|+|(1-\tau)(L_1-L_2)(k_2(x_2))-(1-\tau)(L_1-L_2)(0)|\\&+|\tau(L_1-L_2)(k_3(x_2))-\tau(L_1-L_2)(k_3(0))|+|(1-\tau)(L_1-L_2)(k_4(x_2))-(1-\tau)(L_1-L_2)(k_4(0))|\\ \le{}&\eta_1\eta_5\tau\|L_1-L_2\|+\eta_2\eta_5(1-\tau)\|L_1-L_2\|+\eta_3\tau\|L_1-L_2\|+\eta_4(1-\tau)\|L_1-L_2\|\\&+\eta_6\tau\|L_1-L_2\|+\eta_7(1-\tau)\|L_1-L_2\|+\eta_3\tau x_2\|L_1-L_2\|+\eta_4(1-\tau)x_2\|L_1-L_2\|\\ \le{}&\lambda_2\|L_1-L_2\|\qquad(\text{since } x_2\le 1),\end{aligned}$
where λ2 is given in (2.7). This gives that
d(KL1,KL2)=‖KL1−KL2‖≤λ2‖L1−L2‖=λ2d(L1,L2). |
Since $\lambda_2<1$, by Theorem 1.2 the operator $K$ has a unique fixed point; that is, (2.2) has a unique solution, and the iteration (2.8) converges to it.
The following conclusion is derived from Theorem 2.3.
Corollary 2.4. Consider the generalized stochastic equation (2.2). Assume that $k_1,k_2,k_3,k_4:V\to V$ are contraction mappings with contractive coefficients $\eta_1,\eta_2,\eta_3,\eta_4$, where $\eta_1\le\eta_2\le\eta_3\le\eta_4$, and that there exist $\eta_6,\eta_7\ge 0$ such that
$|k_1(x)|\le\eta_6 \quad\text{and}\quad |k_2(x)|\le\eta_7, \quad\text{for all } x\in V, \qquad (2.9)$
and that $\tilde{\lambda}_2<1$, where $\tilde{\lambda}_2$ is defined as
$\tilde{\lambda}_2:=(2+\eta_5)\eta_4+\eta_7+(\eta_6-\eta_7)\tau. \qquad (2.10)$
Assume that there is a nonempty subset $O$ of $T:=\{L\in D \mid L(1)\le 1\}$ such that $(O,\|\cdot\|)$ is a Banach space, where $\|\cdot\|$ is given in (2.1). Then (2.2) has a unique solution. Furthermore, the sequence $\{L_n\}$ in $O$, defined for all $n\in\mathbb{N}$ from an arbitrary $L_0\in O$ by
$L_n(x)=\tau\nu(x)L_{n-1}(k_1(x))+(1-\tau)\nu(x)L_{n-1}(k_2(x))+\tau(1-\nu(x))L_{n-1}(k_3(x))+(1-\tau)(1-\nu(x))L_{n-1}(k_4(x)), \qquad (2.11)$
converges to the unique solution of (2.2).
Remark 2.5. Our proposed generalized stochastic equation (2.2) generalizes many mathematical models in the related literature (including the equations discussed in the introduction). For instance:
(1) If we put $\tau=0$ and define $\nu,k_2,k_4:V\to V$ by
$\nu(x)=x,\quad k_2(x)=\eta_1x+1-\eta_1 \quad\text{and}\quad k_4(x)=\eta_2x,$
where $0<\eta_1\le\eta_2<1$, then our proposed model (2.2) reduces to the model examined in [20].
(2) If we put $\tau=1$, define $\nu(x)=x$, and take $k_1,k_3:V\to V$ as BCMs with contractive constants $\eta_1$ and $\eta_2$, respectively, with $\eta_1\le\eta_2$, then our proposed stochastic equation (2.2) reduces to the functional equations examined in [21,22].
To support our results, we now present the following examples.
Example 2.6. Consider the stochastic equation stated below
$L(x)=\tau xL\!\left(\dfrac{x}{13}\right)+(1-\tau)xL\!\left(\dfrac{3x}{14}\right)+\tau(1-x)L\!\left(\dfrac{x}{9}\right)+(1-\tau)(1-x)L\!\left(\dfrac{2x}{11}\right) \qquad (2.12)$
for all $x\in V$, where $L:V\to\mathbb{R}$ is an unknown function. If we define the mappings $\nu,k_1,k_2,k_3,k_4:V\to V$ by
$\nu(x)=x,\quad k_1(x)=\dfrac{x}{13},\quad k_2(x)=\dfrac{3x}{14},\quad k_3(x)=\dfrac{x}{9} \quad\text{and}\quad k_4(x)=\dfrac{2x}{11}$
for all x∈V, then our generalized stochastic equation (2.2) reduces to the Eq (2.12).
Now, our aim is to use Theorem 2.1 to establish the existence of a unique solution to the above problem. Here, $k_1,k_2,k_3,k_4$ are contraction mappings with contractive coefficients $\eta_1=\frac{1}{13}$, $\eta_2=\frac{3}{14}$, $\eta_3=\frac{1}{9}$ and $\eta_4=\frac{2}{11}$, respectively, and $k_1(0)=k_2(0)=k_3(0)=k_4(0)=0$. Also, $\nu:V\to V$ is a non-expansive mapping with $\nu(0)=0$ and $\eta_5=1$. Thus,
$\lambda_1:=\tau(\eta_1(1+\eta_5)+2\eta_3)+(1-\tau)(\eta_2(1+\eta_5)+2\eta_4)=\frac{44}{117}\,\tau+\frac{61}{77}\,(1-\tau)\le\frac{61}{77}<1$
for all $\tau\in V$. All of the premises of Theorem 2.1 are therefore satisfied, so the functional equation (2.12) has exactly one solution.
Furthermore, if we pick $L_0(x)=x$ for all $x\in V$ as an initial approximation, then the following iteration converges to the unique solution of (2.12):
$L_1(x)=\dfrac{1}{18018}\left[(585-1201\tau)x^2+(3276-1274\tau)x\right],$
$L_2(x)=\dfrac{1}{5849513501832}\left[7520845753\tau^2x^3-8677708650\tau x^3+2442459825x^3+35644603904\tau^2x^2-101143007784\tau x^2+40809099240x^2+29244583368\tau^2x-150400714464\tau x+193372347168x\right],$
$\vdots$
$L_n(x)=\tau xL_{n-1}\!\left(\dfrac{x}{13}\right)+(1-\tau)xL_{n-1}\!\left(\dfrac{3x}{14}\right)+\tau(1-x)L_{n-1}\!\left(\dfrac{x}{9}\right)+(1-\tau)(1-x)L_{n-1}\!\left(\dfrac{2x}{11}\right)$
for all n∈N.
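As a numerical companion to Example 2.6 (a sketch under our own discretization choices, not part of the paper), the snippet below evaluates $\lambda_1$ from (2.3) over a grid of $\tau$ values and runs the iteration (2.4) for Eq (2.12) with the iterate tabulated on a uniform grid and evaluated by linear interpolation.

```python
# Numerical check for Example 2.6: lambda_1 < 1 for every tau, and the
# iteration (2.4) applied to Eq. (2.12) on a uniform grid (grid size, tau,
# and iteration count are our own illustrative choices).
eta1, eta2, eta3, eta4, eta5 = 1 / 13, 3 / 14, 1 / 9, 2 / 11, 1.0

def lam1(tau):
    """lambda_1 from (2.3) for the mappings of Example 2.6."""
    return (tau * (eta1 * (1 + eta5) + 2 * eta3)
            + (1 - tau) * (eta2 * (1 + eta5) + 2 * eta4))

assert max(lam1(t / 100) for t in range(101)) < 1   # lambda_1 < 1 on [0, 1]

def picard_step(L, tau, n):
    """One step of (2.4) for Eq. (2.12), with L tabulated on a uniform grid
    and evaluated between grid points by linear interpolation."""
    def interp(t):
        j = min(int(t * n), n - 1)
        w = t * n - j
        return (1 - w) * L[j] + w * L[j + 1]
    return [tau * x * interp(x / 13) + (1 - tau) * x * interp(3 * x / 14)
            + tau * (1 - x) * interp(x / 9) + (1 - tau) * (1 - x) * interp(2 * x / 11)
            for x in (i / n for i in range(n + 1))]

tau, n = 0.5, 100
L = [i / n for i in range(n + 1)]                   # initial approximation L0(x) = x
for _ in range(60):
    L = picard_step(L, tau, n)
print(max(abs(v) for v in L))                       # sup of |L_60| on the grid
```

Successive iterates contract in the norm (2.1) at rate at most $\lambda_1$, so the tabulated values stabilize after a few dozen steps.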
Example 2.7. Consider the stochastic equation stated below
$L(x)=\tau xL\!\left(\dfrac{ax+1-a}{2}\right)+(1-\tau)xL\!\left(\dfrac{bx+1-b}{2}\right)+\tau(1-x)L\!\left(\dfrac{cx}{2}\right)+(1-\tau)(1-x)L\!\left(\dfrac{dx}{2}\right) \qquad (2.13)$
for all x∈V and 0<a,b,c,d<1, where L:V→R is an unknown function. Also, if we define mappings ν,k1,k2,k3,k4:V→V by
$\nu(x)=x,\quad k_1(x)=\dfrac{ax+1-a}{2},\quad k_2(x)=\dfrac{bx+1-b}{2},\quad k_3(x)=\dfrac{cx}{2} \quad\text{and}\quad k_4(x)=\dfrac{dx}{2}$
for all x∈V, then the generalized stochastic equation (2.2) reduces to the Eq (2.13).
We next attempt to solve the problem by using Theorem 2.3. Here, $k_1,k_2,k_3,k_4$ are contraction mappings with contractive coefficients $\eta_1=\frac{a}{2}$, $\eta_2=\frac{b}{2}$, $\eta_3=\frac{c}{2}$ and $\eta_4=\frac{d}{2}$, respectively, and $\nu:V\to V$ is a non-expansive mapping with $\nu(0)=0$ and $\eta_5=1$. Also,
$|k_1(x)|\le\dfrac{1}{2} \quad\text{and}\quad |k_2(x)|\le\dfrac{1}{2}, \quad\text{for all } x\in V,$
and
k3(0)=k4(0)=0. |
Thus,
$\lambda_2:=\tau(\eta_1\eta_5+2\eta_3+\eta_6)+(1-\tau)(\eta_2\eta_5+2\eta_4+\eta_7)=\dfrac{\tau}{2}(a+2c+1)+\dfrac{1-\tau}{2}(b+2d+1).$
Now, all the hypotheses of Theorem 2.3 are fulfilled, so (2.13) has a unique solution whenever $\lambda_2<1$.
Furthermore, if we pick $L_0(x)=x$ for all $x\in V$ as an initial approximation, the following iteration converges to the unique solution of (2.13):
$L_1(x)=\dfrac{1}{2}\left[(b+a\tau-b\tau-c\tau+d\tau-d)x^2+(1+d-b-a\tau+b\tau+c\tau-d\tau)x\right],$
$L_2(x)=\tau xL_1\!\left(\dfrac{ax+1-a}{2}\right)+(1-\tau)xL_1\!\left(\dfrac{bx+1-b}{2}\right)+\tau(1-x)L_1\!\left(\dfrac{cx}{2}\right)+(1-\tau)(1-x)L_1\!\left(\dfrac{dx}{2}\right),$
$\vdots$
$L_n(x)=\tau xL_{n-1}\!\left(\dfrac{ax+1-a}{2}\right)+(1-\tau)xL_{n-1}\!\left(\dfrac{bx+1-b}{2}\right)+\tau(1-x)L_{n-1}\!\left(\dfrac{cx}{2}\right)+(1-\tau)(1-x)L_{n-1}\!\left(\dfrac{dx}{2}\right)$
for all n∈N.
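A quick check of the constant $\lambda_2$ in (2.7) for Example 2.7; the values of $a,b,c,d$ and $\tau$ below are illustrative and not prescribed by the paper.

```python
def lam2(a, b, c, d, tau):
    """lambda_2 from (2.7) for Example 2.7, where eta1 = a/2, ..., eta4 = d/2,
    eta5 = 1 and eta6 = eta7 = 1/2."""
    eta1, eta2, eta3, eta4 = a / 2, b / 2, c / 2, d / 2
    eta5, eta6, eta7 = 1.0, 0.5, 0.5
    return (tau * (eta1 * eta5 + 2 * eta3 + eta6)
            + (1 - tau) * (eta2 * eta5 + 2 * eta4 + eta7))

# Illustrative values of a, b, c, d and tau (not prescribed by the paper).
a, b, c, d, tau = 0.3, 0.4, 0.2, 0.25, 0.6
value = lam2(a, b, c, d, tau)
print(value, value < 1)   # Theorem 2.3 applies to (2.13) when lambda_2 < 1
```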
In mathematical modeling, the stability of solutions is critical: slight changes in the data, such as those caused by natural measurement errors, should not have a disproportionate impact on the conclusion. Hence, it is essential to analyze the stability of the solution of the suggested functional equation (1.4). For details of HU and HUR stability, we refer to [23,24,25,26,27,28,29,30].
Theorem 3.1. Under the hypotheses of Theorem 2.1, the equation $KL=L$, where $K:O\to O$ is given by
$(KL)(x)=\tau\nu(x)L(k_1(x))+(1-\tau)\nu(x)L(k_2(x))+\tau(1-\nu(x))L(k_3(x))+(1-\tau)(1-\nu(x))L(k_4(x)), \qquad (3.1)$
for all $L\in O$ and $x\in V$, has HUR stability; that is, for a fixed function $\vartheta:O\to[0,\infty)$, for every $L\in O$ with $d(KL,L)\le\vartheta(L)$ there exists a unique $L^\star\in O$ such that $KL^\star=L^\star$ and $d(L,L^\star)\le\ell\vartheta(L)$ for some $\ell>0$.
Proof. Let L∈O such that d(KL,L)≤ϑ(L). By utilizing Theorem 2.1, we have a unique L⋆∈O such that KL⋆=L⋆. Therefore, we obtain
d(L,L⋆)≤d(L,KL)+d(KL,L⋆)≤ϑ(L)+d(KL,KL⋆)≤ϑ(L)+λ1d(L,L⋆), |
where λ1 is given in (2.3), and so
d(L,L⋆)≤ℓϑ(L), |
where $\ell:=\dfrac{1}{1-\lambda_1}$.
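For a concrete feel of the HUR bound (an illustration under an assumed value of $\lambda_1$, not a computation from the paper):

```python
def stability_constant(lam1):
    """The HUR stability constant of Theorem 3.1: l = 1 / (1 - lambda_1),
    valid only when lambda_1 < 1."""
    if not 0 <= lam1 < 1:
        raise ValueError("Theorem 3.1 requires 0 <= lambda_1 < 1")
    return 1.0 / (1.0 - lam1)

# An illustrative value of lambda_1; the constant grows as lambda_1 approaches 1.
print(stability_constant(0.584))   # roughly 2.40
```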
From the above analysis, we obtain the result corresponding to the HU stability.
Corollary 3.2. Under the hypotheses of Theorem 2.1, the equation $KL=L$, where $K:O\to O$ is given by
$(KL)(x)=\tau\nu(x)L(k_1(x))+(1-\tau)\nu(x)L(k_2(x))+\tau(1-\nu(x))L(k_3(x))+(1-\tau)(1-\nu(x))L(k_4(x)), \qquad (3.2)$
for all $L\in O$ and $x\in V$, has HU stability; that is, for a fixed $\xi>0$, for every $L\in O$ with $d(KL,L)\le\xi$ there exists a unique $L^\star\in O$ such that $KL^\star=L^\star$ and $d(L,L^\star)\le\ell\xi$ for some $\ell>0$.
Mathematical psychology is a branch of psychology oriented toward mathematical modeling. The learning process in human beings or animals can be viewed as a sequence of responses among several possible choices, so most learning research focuses on determining the probability of trial-to-trial occurrences, which characterizes a stochastic process. In this work, we proposed a general stochastic functional equation that can be used to discuss numerous psychological learning-theory experiments on animals and humans in the existing literature. In addition, we examined the existence, uniqueness, and stability of the solution of the suggested generalized stochastic equation by utilizing fixed-point tools. Two examples were also given that show the relevance of our results in this area of research.
We would like to express our sincere gratitude to the anonymous referee for his/her helpful comments that will help to improve the quality of the manuscript.
The authors declare that they have no competing interests.
[1] | R. Bush, F. Mosteller, Stochastic models for learning, New York: Wiley, 1955. |
[2] | V. I. Istrǎţescu, On a functional equation, J. Math. Anal. Appl., 56 (1976), 133–136. https://doi.org/10.1016/0022-247X(76)90012-3 |
[3] | A. A. Bush, T. R. Wilson, Two-choice behavior of paradise fish, J. Exp. Psychol., 51 (1956), 315–322. https://doi.org/10.1037/h0044651 |
[4] | B. Epstein, On a difference equation arising in a learning-theory model, Israel J. Math., 4 (1966), 145–152. https://doi.org/10.1007/BF02760073 |
[5] | A. Turab, W. Sintunavarat, On the solution of the traumatic avoidance learning model approached by the Banach fixed point theorem, J. Fixed Point Theory Appl., 22 (2020). https://doi.org/10.1007/s11784-020-00788-3 |
[6] | A. Turab, W. G. Park, W. Ali, Existence, uniqueness, and stability analysis of the probabilistic functional equation emerging in mathematical biology and the theory of learning, Symmetry, 13 (2021), 1313. https://doi.org/10.3390/sym13081313 |
[7] | A. Turab, W. Sintunavarat, Some particular aspects of certain type of probabilistic predator-prey model with experimenter-subject-controlled events and the fixed point method, AIP Conf. Proc., 2423 (2021), 060005. https://doi.org/10.1063/5.0075282 |
[8] | W. K. Estes, J. H. Straughan, Analysis of a verbal conditioning situation in terms of statistical learning theory, J. Exp. Psychol., 47 (1954), 225–234. https://doi.org/10.1037/h0060989 |
[9] | D. A. Grant, H. W. Hake, J. P. Hornseth, Acquisition and extinction of a verbal conditioned response with differing percentages of reinforcement, J. Exp. Psychol., 42 (1951), 1–5. https://doi.org/10.1037/h0054051 |
[10] | L. G. Humphreys, Acquisition and extinction of verbal expectations in a situation analogous to conditioning, J. Exp. Psychol., 25 (1939), 294–301. https://doi.org/10.1037/h0053555 |
[11] | M. E. Jarvik, Probability learning and a negative recency effect in the serial anticipation of alternative symbols, J. Exp. Psychol., 41 (1951), 291–297. https://doi.org/10.1037/h0056878 |
[12] | H. Aydi, E. Karapinar, V. Rakocevic, Nonunique fixed point theorems on b-metric spaces via simulation functions, Jordan J. Math. Stat., 12 (2019), 265–288. |
[13] | E. Karapinar, Recent advances on the results for nonunique fixed in various spaces, Axioms, 8 (2019), 72. https://doi.org/10.3390/axioms8020072 |
[14] | H. H. Alsulami, E. Karapinar, V. Rakočević, Ciric type nonunique fixed point theorems on b-metric spaces, Filomat, 31 (2017), 3147–3156. https://doi.org/10.2298/FIL1711147A |
[15] | H. Lakzian, D. Gopal, W. Sintunavarat, New fixed point results for mappings of contractive type with an application to nonlinear fractional differential equations, J. Fixed Point Theory Appl., 18 (2016), 251–266. https://doi.org/10.1007/s11784-015-0275-7 |
[16] | P. Baradol, D. Gopal, S. Radenović, Computational fixed points in graphical rectangular metric spaces with application, J. Comput. Appl. Math., 375 (2020), 112805. https://doi.org/10.1016/j.cam.2020.112805 |
[17] | V. Berinde, Iterative approximation of fixed points, Springer, 2007. https://doi.org/10.1007/978-3-540-72234-2 |
[18] | S. Banach, Sur les opérations dans les ensembles abstraits et leur applications aux équations intégrales, Fund. Math., 3 (1922), 133–181. |
[19] | A. Turab, W. Sintunavarat, Corrigendum: On analytic model for two-choice behavior of the paradise fish based on the fixed point method (J. Fixed Point Theory Appl. 2019, 21:56), J. Fixed Point Theory Appl., 22 (2020), 82. https://doi.org/10.1007/s11784-020-00818-0 |
[20] | A. Turab, W. Sintunavarat, On analytic model for two-choice behavior of the paradise fish based on the fixed point method, J. Fixed Point Theory Appl., 21 (2019), 56. https://doi.org/10.1007/s11784-019-0694-y |
[21] | A. Turab, W. Sintunavarat, On the solutions of the two preys and one predator type model approached by the fixed point theory, Sadhana, 45 (2020), 211. https://doi.org/10.1007/s12046-020-01468-1 |
[22] | V. Berinde, A. R. Khan, On a functional equation arising in mathematical biology and theory of learning, Creat. Math. Inform., 24 (2015), 9–16. |
[23] | Th. M. Rassias, On the stability of the linear mapping in Banach spaces, Proc. Amer. Math. Soc., 72 (1978), 297–300. https://doi.org/10.2307/2042795 |
[24] | D. H. Hyers, On the stability of the linear functional equation, Proc. Nat. Acad. Sci. USA, 27 (1941), 222–224. https://doi.org/10.1073/pnas.27.4.222 |
[25] | S. M. Ulam, A collection of the mathematical problems, New York: Interscience, 1960. |
[26] | T. Aoki, On the stability of the linear transformation in Banach spaces, J. Math. Soc. Japan, 2 (1950), 64–66. https://doi.org/10.2969/jmsj/00210064 |
[27] | D. H. Hyers, G. Isac, Th. M. Rassias, Stability of functional equations in several variables, Basel: Birkhauser, 1998. |
[28] | J. H. Bae, W. G. Park, A fixed point approach to the stability of a Cauchy-Jensen functional equation, Abst. Appl. Anal., 2012 (2012), 205160. https://doi.org/10.1155/2012/205160 |
[29] | M. Gachpazan, O. Bagdani, Hyers-Ulam stability of nonlinear integral equation, Fixed Point Theory Appl., 2010 (2010), 927640. https://doi.org/10.1155/2010/927640 |
[30] | J. S. Morales, E. M. Rojas, Hyers-Ulam and Hyers-Ulam-Rassias stability of nonlinear integral equations with delay, Int. J. Nonlinear Anal. Appl., 2 (2011), 1–6. https://doi.org/10.22075/IJNAA.2011.47 |