
Comparison of two commercial DNA extraction kits for the analysis of nasopharyngeal bacterial communities

  • Received: 31 March 2016 Accepted: 28 April 2016 Published: 29 April 2016
  • Characterization of microbial communities via next-generation sequencing (NGS) requires an extraction of microbial DNA. Methodological differences in DNA extraction protocols may bias results and complicate inter-study comparisons. Here we compare the effect of two commonly used commercial kits (Norgen and Qiagen) for the extraction of total DNA on estimating nasopharyngeal microbiome diversity. The nasopharynx is a reservoir for pathogens associated with respiratory illnesses and a key player in understanding airway microbial dynamics.
    Total DNA from nasal washes corresponding to 30 asthmatic children was extracted using the Qiagen QIAamp DNA and Norgen RNA/DNA Purification kits and analyzed via Illumina MiSeq 16S rRNA V4 amplicon sequencing. The Norgen samples included more sequence reads and OTUs per sample than the Qiagen samples, but OTU counts per sample varied proportionally between groups (r = 0.732). Microbial profiles varied slightly between sample pairs, but alpha- and beta-diversity indices (PCoA and clustering) showed high similarity between Norgen and Qiagen microbiomes. Moreover, no significant differences in community structure (PERMANOVA and adonis tests) and taxa proportions (Kruskal-Wallis test) were observed between kits. Finally, a Procrustes analysis also showed low dissimilarity (M² = 0.173; P < 0.001) between the PCoAs of the two DNA extraction kits.
    Contrary to what has been observed in previous studies comparing DNA extraction methods, our 16S NGS analysis of nasopharyngeal washes did not reveal significant differences in community composition or structure between kits. Our findings suggest congruence between column-based chromatography kits and support the comparison of microbiome profiles across nasopharyngeal metataxonomic studies.

    Citation: Marcos Pérez-Losada, Keith A. Crandall, Robert J. Freishtat. Comparison of two commercial DNA extraction kits for the analysis of nasopharyngeal bacterial communities. AIMS Microbiology, 2016, 2(2): 108-119. doi: 10.3934/microbiol.2016.2.108



    With the continuous development of computing technology over the past decade, deep learning has advanced rapidly and is now applied in many fields, such as imaging and natural language processing. Scholars have begun to apply deep learning to solve complex partial differential equations (PDEs), including PDEs with high-order derivatives [1], high-dimensional PDEs [2,3], subdiffusion problems with noisy data [4] and so on. Based on the deep learning method, Raissi et al. [5] proposed a novel algorithm called the physics informed neural network (PINN), which has achieved excellent results for solving forward and inverse PDEs. It integrates the physical information described by PDEs into a neural network. In recent years, the PINN algorithm has attracted extensive attention. To solve forward and inverse problems of integro-differential equations (IDEs), Yuan et al. [6] proposed the auxiliary physics informed neural network (A-PINN). Lin and Chen [7] designed a two-stage physics informed neural network for approximating localized wave solutions, which introduces the measurement of conserved quantities in stage two. Yang et al. [8] developed Bayesian physics informed neural networks (B-PINNs), which take a Bayesian neural network as the prior and Hamiltonian Monte Carlo or variational inference as the posterior estimator. Scholars have also presented other variants of PINN, such as RPINNs [9] and $ hp $-VPINNs [10]. PINN has also performed well on physical problems, including high-speed flows [11] and heat transfer problems [12]. In addition to integer-order differential equations, authors have studied the application of PINN to fractional differential equations such as fractional advection-diffusion equations (see Pang et al. [13] for fractional physics informed neural networks (fPINNs)), high-dimensional fractional PDEs (see Guo et al. [14] for Monte Carlo physics-informed neural networks (MC-PINNs)) and fractional water wave models (see Liu et al. [15] for the time difference PINN).

    In recent decades, fractional differential equations (FDEs) have attracted attention and been studied in many fields, such as image denoising [16] and physics [17,18,19]. The reason fractional differential equations have attracted such wide attention is that they can describe complex physical phenomena more precisely. As an important class of fractional problems, distributed-order differential equations are difficult to solve because of the complexity of distributed-order operators. To solve time multi-term and distributed-order fractional sub-diffusion equations, Gao et al. [20] proposed a second-order numerical difference formula. Jian et al. [21] derived a fast second-order implicit difference scheme for time distributed-order and Riesz space fractional diffusion-wave equations and analyzed its unconditional stability and second-order convergence. Li et al. [22] applied the mid-point quadrature rule with the finite volume method to approximate the distributed-order equation. For the nonlinear distributed-order sub-diffusion model [23], the distributed-order derivative and the spatial direction were approximated by the FBN-$ \theta $ formula with a second-order composite numerical integral formula and the $ H^1 $-Galerkin mixed finite element method, respectively. In [24], Guo et al. adopted the Legendre-Galerkin spectral method for solving 2D distributed-order space-time reaction-diffusion equations. For the two-dimensional Riesz space distributed-order equation, Zhang et al. [25] used Gauss quadrature to calculate the distributed-order derivative and applied an alternating direction implicit (ADI) Galerkin-Legendre spectral scheme to approximate the spatial direction. For the distributed-order fourth-order sub-diffusion equation, Ran and Zhang [26] developed new compact difference schemes and proved their stability and convergence. In [27,28], the authors developed spectral methods for distributed-order time fractional fourth-order PDEs.

    As is well known, distributed-order fractional PDEs can be regarded as the limiting case of multi-term fractional PDEs [29]. Moreover, Diethelm and Ford [30] observed that small changes in the order of a fractional PDE lead to only slight changes in the final solution, which supports approximating the distributed-order integral numerically. In view of this, we combine the FBN-$ \theta $ formula [31,32] and a second-order composite numerical integral formula with a multi-output neural network to solve 1D and 2D nonlinear time distributed-order models. Following the idea of combining a single output neural network with the discrete scheme of fractional models [13], we also use a single output neural network combined with the time discrete scheme to solve the nonlinear time distributed-order models. However, the accuracy of the predicted solution computed by the single output neural network scheme is low and its training takes a lot of time. Therefore, we introduce a multi-output neural network to obtain the numerical solution of the time discrete scheme. Compared with the single output neural network scheme, the proposed multi-output neural network scheme has two main advantages:

    ● Saving computing time. The multi-output neural network scheme reduces the sampling domain of the collocation points from the spatiotemporal domain to the spatial domain, which decreases the size of the training dataset and thus reduces the training time.

    ● Improving the accuracy of the predicted solution. Due to the discrete scheme of the distributed-order derivative, the $ n $-th output of the multi-output neural network is constrained by the previous $ n-1 $ outputs.

    The remainder of this article is organized as follows: In Section 2, we describe the components of a neural network and how a neural network is constructed. In Section 3, we give the lemmas used to approximate the distributed-order derivative and describe the construction of the loss function. In Section 4, we provide some numerical results to confirm the capability of our proposed method. Finally, we draw some conclusions in Section 5.

    Faced with different objectives in various fields, scholars have developed many different types of neural networks, such as the feed-forward neural network (FNN) [6], the recurrent neural network (RNN) [33] and the convolutional neural network (CNN) [34]. The FNN considered in this article can effectively solve most PDEs. The input layer, the hidden layers and the output layer are the three indispensable components of an FNN; they can be given, respectively, by

    $ \begin{array}{ll} \text{input layer:} & \Phi_0(\boldsymbol x) = \boldsymbol x\in\mathbb{R}^{d_{in}},\\ \text{hidden layers:} & \Phi_k(\boldsymbol x) = \sigma(\boldsymbol W_k\Phi_{k-1}(\boldsymbol x)+\boldsymbol b_k)\in\mathbb{R}^{\lambda_k},\quad 1\leq k\leq K-1,\\ \text{output layer:} & \Phi_K(\boldsymbol x) = \boldsymbol W_K\Phi_{K-1}(\boldsymbol x)+\boldsymbol b_K\in\mathbb{R}^{d_{out}}. \end{array} $

    $ \boldsymbol W_k\in\mathbb{R}^{\lambda_k\times\lambda_{k-1}} $ and $ \boldsymbol b_k\in\mathbb{R}^{\lambda_k} $ represent the weight matrix and the bias vector of the $ k $th layer, respectively. We define $ \delta = \{\boldsymbol W_k, \boldsymbol b_k\}_{1\leq k\leq K} $, which collects the trainable parameters of the FNN. $ \lambda_k $ denotes the number of neurons in the $ k $th layer, and $ \sigma $ is a nonlinear activation function. In this article, the hyperbolic tangent function [3,6] is selected as the activation function. Many other functions can also serve as activation functions, such as the rectified linear unit (ReLU) $ \sigma(x) = \max\{x, 0\} $ [4] and the logistic sigmoid $ \sigma(x) = \frac{1}{1+e^{-x}} $ [35].
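
    As an illustration, an FNN of this form with $ N $ outputs can be written compactly in PyTorch (a minimal sketch; the class name, constructor arguments and layer sizes below are our own choices, not taken from the authors' code):

```python
import torch
import torch.nn as nn

class FNN(nn.Module):
    """Feed-forward network x -> [v^1(x), ..., v^N(x)] with tanh hidden layers."""
    def __init__(self, d_in, d_out, width=40, depth=6):
        super().__init__()
        layers, prev = [], d_in
        for _ in range(depth):
            layers += [nn.Linear(prev, width), nn.Tanh()]   # hidden layer + activation
            prev = width
        layers.append(nn.Linear(prev, d_out))               # linear output layer
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Example: 1D spatial input, N = 20 time-level outputs
model = FNN(d_in=1, d_out=20)
```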

    In this article, we consider a nonlinear distributed-order model with the following general form:

    $ \begin{cases} D^w_tu+\mathcal{N}(u) = f(\boldsymbol x,t), & (\boldsymbol x,t)\in\Omega\times J,\\ u(\boldsymbol x,0) = u_0(\boldsymbol x), & \boldsymbol x\in\bar{\Omega}, \end{cases} $
    (3.1)

    where $ \Omega \subset \mathbb{R}^d(d\leq2) $ and $ J = (0, T] $. $ \mathcal{N}[\cdot] $ is a nonlinear differential operator. $ D^w_tu $ represents the distributed-order derivative and has the following definition:

    $ D^w_tu(\boldsymbol x,t) = \int^1_0\omega(\alpha)\,{}^C_0D^{\alpha}_tu(\boldsymbol x,t)\,d\alpha, $
    (3.2)

    where $ \omega(\alpha) \geq 0 $, $ \int^1_0\omega(\alpha)d\alpha = c_0 > 0 $ and $ {}^C_0D^{\alpha}_tu(\boldsymbol x, t) $ is the Caputo fractional derivative expressed by

    $ {}^C_0D^{\alpha}_tu(\boldsymbol x,t) = \begin{cases} \dfrac{1}{\Gamma(1-\alpha)}\displaystyle\int^t_0\dfrac{u_{\eta}(\boldsymbol x,\eta)}{(t-\eta)^{\alpha}}\,d\eta, & 0<\alpha<1,\\ u_t(\boldsymbol x,t), & \alpha = 1. \end{cases} $
    (3.3)

    The specific boundary condition is determined by the practical problem.

    For simplicity, choosing a mesh size $ \Delta \alpha = \frac{1}{2I} $, we denote the nodes on the interval $ [0, 1] $ by $ \alpha_i = i\Delta \alpha $ for $ i = 0, 1, 2, \cdots, 2I $. The time interval $ [0, T] $ is divided into a uniform mesh with grid points $ t_n = n\Delta t\ (n = 0, 1, 2, \cdots, N) $, where $ \Delta t = T/N $ is the time step size. We denote $ v^n \approx u^n = u(\boldsymbol x, t_n) $, $ u^{n+\frac{1}{2}}_t = \frac{u^{n+1} - u^n}{\Delta t} + O(\Delta t^2) $ and $ u^{n+\frac{1}{2}} : = \frac{u^{n+1}+u^n}{2} $, where $ v^n $ is the approximate solution of the time discrete scheme. The following lemmas are introduced to construct the numerical discrete formula of (3.1):

    Lemma 3.1. (See [23]) Supposing $ \omega(\alpha) \in C^2[0, 1] $, we can get

    $ \int^1_0\omega(\alpha)\,d\alpha = \Delta\alpha\sum^{2I}_{i=0}c_i\omega(\alpha_i)-\frac{\Delta\alpha^2}{12}\omega^{(2)}(\gamma),\quad\gamma\in(0,1), $
    (3.4)

    where

    $ c_i = \begin{cases} \dfrac{1}{2}, & i = 0,\ 2I,\\ 1, & \text{otherwise}. \end{cases} $
    (3.5)
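
    As a quick illustration of Lemma 3.1, the nodes $ \alpha_i $ and coefficients $ c_i $ can be generated as follows (a NumPy/SciPy sketch that uses the weight function $ \omega(\alpha) = \Gamma(3-\alpha) $ from the numerical examples; the function name is ours):

```python
import numpy as np
from scipy.special import gamma

def quad_rule(I):
    """Nodes alpha_i = i*dalpha and trapezoidal coefficients c_i on [0, 1], Eqs (3.4)-(3.5)."""
    dalpha = 1.0 / (2 * I)
    alpha = dalpha * np.arange(2 * I + 1)
    c = np.ones(2 * I + 1)
    c[0] = c[-1] = 0.5          # half weights at the two end points
    return alpha, c, dalpha

alpha, c, dalpha = quad_rule(I=250)           # dalpha = 1/500, as in the examples
print(dalpha * np.sum(c * gamma(3 - alpha)))  # approximates  int_0^1 Gamma(3-a) da
```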

    Lemma 3.2. From [23,31,32], the discrete formula of the Caputo fractional derivative (3.3) can be obtained by

    $ {}^C_0D^{\alpha}_tu(\boldsymbol x,t_{n+\frac{1}{2}}) = \frac{{}^C_0D^{\alpha}_tu^{n+1}+{}^C_0D^{\alpha}_tu^{n}}{2}+O(\Delta t^2) = \Delta t^{-\alpha}\sum^{n+1}_{s=0}\tilde{\kappa}^{(\alpha)}_{n+1-s}u^s+O(\Delta t^2), $
    (3.6)

    where

    $ \tilde{\kappa}^{(\alpha)}_{n+1-s} = \begin{cases} \dfrac{\kappa^{(\alpha)}_0}{2}, & s = n+1,\\ \dfrac{\kappa^{(\alpha)}_{n-s}+\kappa^{(\alpha)}_{n+1-s}}{2}, & \text{otherwise}. \end{cases} $
    (3.7)

    The parameters $ \kappa^{(\alpha)}_{i}(i = 0, 1, \cdots, n+1) $ that are the coefficients of FBN-$ \theta $ $ (\theta \in [-\frac{1}{2}, 1]) $ can be given by

    $ \kappa^{(\alpha)}_i = \begin{cases} \dfrac{2^{\alpha}(1+\alpha\theta)}{(3-2\theta)^{\alpha}}, & i = 0,\\ \dfrac{\phi_0\kappa^{(\alpha)}_0}{\psi_0}, & i = 1,\\ \dfrac{1}{2\psi_0}\left[(\phi_0-\psi_1)\kappa^{(\alpha)}_1+\phi_1\kappa^{(\alpha)}_0\right], & i = 2,\\ \dfrac{1}{i\psi_0}\sum\limits^{3}_{j=1}\left[\phi_{j-1}-(i-j)\psi_j\right]\kappa^{(\alpha)}_{i-j}, & i\geq3, \end{cases} $
    (3.8)

    where

    $ \phi_i = \begin{cases} 2\alpha(\theta-1)(\alpha\theta+1)+\alpha\theta\left(\theta-\dfrac{3}{2}\right), & i = 0,\\ \alpha(2\theta^2-3\alpha\theta+4\alpha\theta^2-1), & i = 1,\\ \alpha\theta(1-2\theta+\alpha-2\alpha\theta), & i = 2, \end{cases} $
    (3.9)

    and

    $ \psi_i = \begin{cases} \dfrac{1}{2}(3-2\theta)(1+\alpha\theta), & i = 0,\\ \dfrac{\alpha\theta}{2}(3-2\theta)-2(1-\theta)(\alpha\theta+1), & i = 1,\\ \dfrac{1}{2}(2\theta-1)(\alpha\theta+1)-2\alpha\theta(\theta-1), & i = 2,\\ \dfrac{1}{2}\alpha\theta(1-2\theta), & i = 3. \end{cases} $
    (3.10)

    Lemma 3.3. (See [23]) The distributed-order term $ D^{\omega}_tu $ at $ t = t_{n+\frac{1}{2}} $ can be calculated by the following formula:

    $ D^{\omega}_tu(\boldsymbol x,t_{n+\frac{1}{2}}) = \frac{D^{\omega}_tu^{n+1}+D^{\omega}_tu^{n}}{2}+O(\Delta t^2) = \frac{1}{2}\sum^{n+1}_{s=0}\beta_{n+1-s}u^s+O(\Delta t^2+\Delta\alpha^2), $
    (3.11)

    where

    $ \beta_{n+1-s} = \begin{cases} \hat{\kappa}_{n-s}+\hat{\kappa}_{n+1-s}, & 0\leq s<n+1,\\ \hat{\kappa}_0, & s = n+1, \end{cases} $
    (3.12)

    and

    $ \hat{\kappa}_{n-s} = \sum^{2I}_{i=0}\varphi_i\Delta t^{-\alpha_i}\kappa^{(\alpha_i)}_{n-s},\qquad \varphi_i = \Delta\alpha\,\omega(\alpha_i)c_i. $
    (3.13)
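
    To make the weight construction concrete, the following NumPy sketch assembles the quantities of Eqs (3.12) and (3.13). It assumes a hypothetical helper fbn_kappa(alpha, theta, m) that returns the first m FBN-$ \theta $ coefficients $ \kappa^{(\alpha)}_0, \cdots, \kappa^{(\alpha)}_{m-1} $ of (3.8); that routine is not reproduced here.

```python
import numpy as np

def hat_kappa(fbn_kappa, omega, theta, dt, I, m):
    """kappa_hat_j = sum_i phi_i * dt**(-alpha_i) * kappa_j^(alpha_i) for j = 0..m-1, Eq (3.13)."""
    dalpha = 1.0 / (2 * I)
    alpha = dalpha * np.arange(2 * I + 1)
    c = np.ones(2 * I + 1)
    c[0] = c[-1] = 0.5
    phi = dalpha * omega(alpha) * c                        # phi_i = dalpha*omega(alpha_i)*c_i
    K = np.stack([fbn_kappa(a, theta, m) for a in alpha])  # K[i, j] = kappa_j^(alpha_i)
    return (phi[:, None] * dt ** (-alpha[:, None]) * K).sum(axis=0)

def beta_weights(kh, n):
    """beta_m with m = n+1-s, Eq (3.12): beta_0 = kappa_hat_0, beta_m = kappa_hat_{m-1} + kappa_hat_m.

    kh must contain at least n+2 entries kappa_hat_0, ..., kappa_hat_{n+1}.
    """
    beta = np.empty(n + 2)
    beta[0] = kh[0]
    beta[1:] = kh[:n + 1] + kh[1:n + 2]
    return beta
```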

    Based on the above lemmas, the discrete scheme of the distributed-order model (3.1) at $ t = t_{n+\frac{1}{2}}(n = 0, 1, 2, \cdots, N-1) $ can be expressed by the following equality:

    $ \frac{1}{2}\sum^{n+1}_{s=0}\beta_{n+1-s}v^s+\mathcal{N}(v^{n+\frac{1}{2}}) = f^{n+\frac{1}{2}}. $
    (3.14)

    Then we can obtain the system of equations as follows:

    $ \begin{cases} \dfrac{1}{2}\sum\limits^{1}_{s=0}\beta_{1-s}v^s+\mathcal{N}(v^{\frac{1}{2}}) = f^{\frac{1}{2}},\\ \dfrac{1}{2}\sum\limits^{2}_{s=0}\beta_{2-s}v^s+\mathcal{N}(v^{1+\frac{1}{2}}) = f^{1+\frac{1}{2}},\\ \qquad\vdots\\ \dfrac{1}{2}\sum\limits^{N}_{s=0}\beta_{N-s}v^s+\mathcal{N}(v^{N-\frac{1}{2}}) = f^{N-\frac{1}{2}}. \end{cases} $
    (3.15)

    The system of Eq (3.15) can be rewritten as the following matrix form:

    $ \boldsymbol v(\boldsymbol x)M+\boldsymbol{\mathcal{N}}(\boldsymbol v(\boldsymbol x))+v^0(\boldsymbol x)\boldsymbol\rho_0 = \boldsymbol f(\boldsymbol x), $
    (3.16)

    where the symbols $ \boldsymbol\rho_0 $, $ \boldsymbol v(\boldsymbol x) $, $ \boldsymbol f(\boldsymbol x) $ and $ \boldsymbol{\mathcal{N}}(\boldsymbol v(\boldsymbol x)) $ are vectors, which are given, respectively, by

    $ \begin{array}{l} \boldsymbol\rho_0 = \left[\dfrac{1}{2}\beta_{1-0},\ \dfrac{1}{2}\beta_{2-0},\ \dfrac{1}{2}\beta_{3-0},\ \cdots,\ \dfrac{1}{2}\beta_{N-0}\right],\\[2mm] \boldsymbol v(\boldsymbol x) = \left[v^1(\boldsymbol x),\ v^2(\boldsymbol x),\ \cdots,\ v^N(\boldsymbol x)\right],\\[2mm] \boldsymbol f(\boldsymbol x) = \left[f^{\frac{1}{2}}(\boldsymbol x),\ f^{1+\frac{1}{2}}(\boldsymbol x),\ \cdots,\ f^{N-\frac{1}{2}}(\boldsymbol x)\right],\\[2mm] \boldsymbol{\mathcal{N}}(\boldsymbol v(\boldsymbol x)) = \left[\mathcal{N}(v^{\frac{1}{2}}(\boldsymbol x)),\ \mathcal{N}(v^{1+\frac{1}{2}}(\boldsymbol x)),\ \cdots,\ \mathcal{N}(v^{N-\frac{1}{2}}(\boldsymbol x))\right]. \end{array} $

    The symbol $ M $ is an $ N\times N $ matrix that has the following definition:

    $ M = \begin{pmatrix} \dfrac{1}{2}\beta_{1-1} & \dfrac{1}{2}\beta_{2-1} & \cdots & \dfrac{1}{2}\beta_{N-1}\\ 0 & \dfrac{1}{2}\beta_{2-2} & \cdots & \dfrac{1}{2}\beta_{N-2}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \dfrac{1}{2}\beta_{N-N} \end{pmatrix}. $
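
    Continuing the NumPy sketch above, the matrix $ M $ and vector $ \boldsymbol\rho_0 $ of (3.16) can then be filled from the weights $ \beta_0, \cdots, \beta_N $ (computed once with $ n = N-1 $). This is only one possible way to organize the computation, not the authors' code:

```python
import numpy as np

def assemble_system(beta, N):
    """Upper-triangular M with M[i-1, j-1] = beta_{j-i}/2 for i <= j, and rho_0[j-1] = beta_j/2."""
    M = np.zeros((N, N))
    for i in range(1, N + 1):            # row i corresponds to v^i
        for j in range(i, N + 1):        # column j corresponds to the equation at t_{j-1/2}
            M[i - 1, j - 1] = 0.5 * beta[j - i]
    rho0 = 0.5 * np.asarray(beta[1:N + 1])
    return M, rho0
```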

    Now, we introduce a multi-output neural network $ \boldsymbol v(\boldsymbol x; \delta) = [v^1(\boldsymbol x; \delta), v^2(\boldsymbol x; \delta), \cdots, v^N(\boldsymbol x; \delta)] $ into Eq (3.16), which takes $ \boldsymbol x $ as an input and is used to approximate time discrete solutions $ \boldsymbol v(\boldsymbol x) = [v^1(\boldsymbol x), v^2(\boldsymbol x), \cdots, v^N(\boldsymbol x)] $. This will result in a multi-output PINN $ \boldsymbol \ell(\boldsymbol x) = [\ell^1(\boldsymbol x), \ell^2(\boldsymbol x), \cdots, \ell^N(\boldsymbol x)] $:

    $ \boldsymbol\ell(\boldsymbol x) = \boldsymbol v(\boldsymbol x;\delta)M+\boldsymbol{\mathcal{N}}(\boldsymbol v(\boldsymbol x;\delta))+v^0(\boldsymbol x)\boldsymbol\rho_0-\boldsymbol f(\boldsymbol x), $
    (3.17)

    where $ \ell^{n+1}(\boldsymbol x) $ denotes the residual error of the discrete scheme (3.14), which is given by

    $ \ell^{n+1}(\boldsymbol x) = \frac{1}{2}\sum^{n+1}_{s=1}\beta_{n+1-s}v^s(\boldsymbol x;\delta)+\mathcal{N}(v^{n+\frac{1}{2}}(\boldsymbol x;\delta))+\frac{1}{2}\beta_{n+1-0}v^0(\boldsymbol x)-f^{n+\frac{1}{2}}(\boldsymbol x),\quad n = 0,1,2,\cdots,N-1. $
    (3.18)

    The loss function is constructed in the form of mean square error. Combined with the boundary condition loss, the total loss function can be expressed by the following formula:

    $ MSE_{total} = MSE_{\ell}+MSE_b, $
    (3.19)

    where

    $ MSE_{\ell} = \frac{1}{N\times N_x}\sum^{N}_{j=1}\sum^{N_x}_{i=1}\left|\ell^j(\boldsymbol x^i_{\ell})\right|^2, $
    (3.20)

    and boundary condition loss

    $ MSE_b = \frac{1}{N\times N_b}\sum^{N}_{j=1}\sum^{N_b}_{i=1}\left|v^j(\boldsymbol x^i_b;\delta)-u^j(\boldsymbol x^i_b)\right|^2. $
    (3.21)

    Here, $ \{\boldsymbol x^i_{\ell}\}^{N_x}_{i = 1} $ corresponds to the collocation points on the space domain $ \Omega $ and $ \{\boldsymbol x^i_{b}\}^{N_b}_{i = 1} $ denotes the boundary training data. The schematic diagram of using the multi-output neural network scheme to solve nonlinear time distributed-order models is shown in Figure 1.
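
    The residual (3.17) and the losses (3.19)-(3.21) translate almost literally into code. The PyTorch sketch below is illustrative only: model maps the collocation points to the $ N $ outputs $ v^1, \cdots, v^N $, N_op is a hypothetical callable that evaluates the nonlinear operator $ \mathcal{N} $ on the half-level averages (e.g., via automatic differentiation for the spatial derivatives), and M, rho0, f_vals, u0_vals, u_b are precomputed tensors.

```python
import torch

def total_loss(model, N_op, M, rho0, f_vals, u0_vals, x_coll, x_b, u_b):
    """MSE_ell + MSE_b for the multi-output PINN, Eqs (3.17)-(3.21)."""
    v = model(x_coll)                                                     # (N_x, N) time-level outputs
    ell = v @ M + N_op(v, x_coll) + torch.outer(u0_vals, rho0) - f_vals   # residual, Eq (3.17)
    mse_ell = ell.pow(2).mean()                                           # Eq (3.20)
    mse_b = (model(x_b) - u_b).pow(2).mean()                              # Eq (3.21)
    return mse_ell + mse_b
```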

    Figure 1.  A multi-output neural network framework to solve nonlinear time distributed-order models, where $ MSE_* $ represents the loss function.

    In this section, we consider two nonlinear time distributed-order equations to verify the feasibility and effectiveness of our proposed method. The performance is evaluated by calculating the relative $ L^2 $ error between the predicted and exact solutions. The definition of relative $ L^2 $ error is given by

    $ ||u-v||_{L^2} = \sqrt{\frac{\sum^{N}_{j=1}\sum^{N_x}_{i=1}\left|u^j(\boldsymbol x^i)-v^j(\boldsymbol x^i)\right|^2}{\sum^{N}_{j=1}\sum^{N_x}_{i=1}\left|u^j(\boldsymbol x^i)\right|^2}}. $
    (4.1)
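
    A direct NumPy translation of (4.1), assuming the exact values and the predictions are stored as arrays of shape $ (N_x, N) $ over collocation points and time levels:

```python
import numpy as np

def relative_l2(u, v):
    """Relative L2 error of Eq (4.1) between exact values u and predicted values v."""
    return np.sqrt(np.sum(np.abs(u - v) ** 2) / np.sum(np.abs(u) ** 2))
```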

    Table 1 indicates which optimizer is selected for each example to minimize the loss function.

    Table 1.  The hyperparameters configured in numerical examples.
    Example Optimizer Learning rate Iterations
    $ 1 $ Adam + L-BFGS 0.001 20000
    $ 2 $ Adam + L-BFGS 0.001 20000
    $ 3 $ Adam + L-BFGS 0.001 20000
    $ 4 $ L-BFGS - -


    We use Python to code our algorithms and all codes run on a Lenovo laptop with AMD R7-6800H CPU @ 3.20 GHz and 16.0GB RAM.

    Here, we solve the following distributed-order sub-diffusion model:

    $ u_t+D^{\omega}_tu-\Delta u-\Delta u_t+G(u) = f(\boldsymbol x,t),\quad(\boldsymbol x,t)\in\Omega\times J, $
    (4.2)

    with boundary condition

    $ u(\boldsymbol x,t) = 0,\quad \boldsymbol x\in\partial\Omega,\ t\in\bar{J}, $
    (4.3)

    and initial condition

    $ u(\boldsymbol x,0) = u_0(\boldsymbol x),\quad \boldsymbol x\in\bar{\Omega}, $
    (4.4)

    where the nonlinear term $ G(u) = u^2 $. The symbol $ \Delta $ is the Laplace operator. Based on Eqs (3.14)–(3.21), the loss function $ MSE_{total} $ can be obtained by

    $ MSE_{total} = MSE_{\ell}+MSE_b, $

    where

    $ MSE_{\ell} = \frac{1}{N\times N_x}\sum^{N}_{j=1}\sum^{N_x}_{i=1}\left|\ell^j(\boldsymbol x^i_{\ell})\right|^2,\qquad MSE_b = \frac{1}{N\times N_b}\sum^{N}_{j=1}\sum^{N_b}_{i=1}\left|v^j(\boldsymbol x^i_b;\delta)-0\right|^2. $

    Example 1.

    For this example, we set space domain $ \Omega = (0, 1) $ and time interval $ J = (0, \frac{1}{2}] $. The training set consists of the boundary points and $ N_x = 200 $ collocation points randomly selected in the space domain $ \Omega $. Choosing $ \omega(\alpha) = \Gamma(3-\alpha) $ and the source term

    $ f(x,t) = 2t\sin(2\pi x)+\Gamma(3)\frac{t(t-1)}{\ln t}\sin(2\pi x)+4t^2\pi^2\sin(2\pi x)+8t\pi^2\sin(2\pi x)+\left(t^2\sin(2\pi x)\right)^2, $

    then we can obtain the exact solution $ u(x, t) = t^2\sin(2\pi x) $.
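
    For reference, the exact solution and the source term of Example 1 can be evaluated as follows (a sketch; since $ t(t-1)/\ln t $ has removable singularities at $ t = 0 $ and $ t = 1 $, those points should be avoided or handled separately when evaluating $ f $ numerically):

```python
import numpy as np
from scipy.special import gamma

def u_exact(x, t):
    return t**2 * np.sin(2 * np.pi * x)

def f_source(x, t):
    """Source term of Example 1 for u = t^2*sin(2*pi*x) and omega(alpha) = Gamma(3-alpha)."""
    s = np.sin(2 * np.pi * x)
    return (2 * t * s
            + gamma(3) * t * (t - 1) / np.log(t) * s
            + 4 * t**2 * np.pi**2 * s
            + 8 * t * np.pi**2 * s
            + (t**2 * s) ** 2)
```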

    To evaluate the performance of our proposed method, Figure 2 shows the exact solution and the predicted solution obtained by a multi-output neural network consisting of $ 6 $ hidden layers with 40 neurons in each hidden layer.

    Figure 2.  Example 1: the exact solution and predicted solution with $ \Delta \alpha = \frac{1}{500} $, $ \theta = 1 $ and $ N = 20 $ at $ t = 0.5 $.

    The influence of different network structures on our proposed method for solving Example 1 is presented in Table 2. The accuracy of the predicted solution fluctuates across network architectures and, on the whole, improves as the number of hidden layers increases. Based on the three network architectures, Figure 3 shows how the proposed method behaves as the time step size is gradually decreased. As the number of grid points in the time interval increases, we observe that the relative $ L^2 $ error generally presents an upward trend for a fixed network architecture, and that expanding the depth of the network can effectively improve the accuracy of the predicted solution.

    Table 2.  The relative $ L^2 $ error between the predicted solution with parameters $ N = 20 $, $ \Delta\alpha = \frac{1}{500} $, $ \theta = 1 $ and exact solution for different numbers of hidden layers and different numbers of neurons per layer.
    Hidden layers \ Neurons 20 30 40 50 60
    2 3.583482e-02 7.850782e-02 5.238403e-02 1.143600e-01 3.761048e-02
    4 3.815002e-02 5.144618e-02 3.622479e-02 2.317038e-02 3.721057e-02
    6 2.433509e-02 1.216386e-01 2.638224e-02 2.837553e-02 2.682958e-02

    Figure 3.  Example 1: the variation trend of relative $ L^2 $ error between the predicted solution with $ \theta = 1 $, $ \Delta\alpha = \frac{1}{500} $ and the exact solution.

    Numerical results calculated by the single output and multi-output neural network schemes are presented in Table 3, where we select $ 200 $ collocation points in the given spatial domain by random sampling and set $ N = 10 $, $ \theta = 1 $ and $ \Delta\alpha = \frac{1}{500} $. It is easy to see that the predicted solution calculated by the multi-output neural network scheme is more accurate than that of the single output neural network scheme, and that replacing the single output neural network with the multi-output neural network saves a lot of computing time.

    Table 3.  The relative $ L^2 $ error and computing time given by the single output and multi-output neural network schemes.
    Neural network Layers Neurons Relative $ L^2 $ error CPU time (s)
    multi-output 4 20 1.948281e-02 32.80
    40 1.858120e-02 44.45
    6 20 9.724722e-03 51.40
    40 1.024869e-02 65.88
    single output 4 20 8.741336e-02 661.99
    40 2.702814e-02 742.17
    6 20 2.937448e-02 748.31
    40 2.278774e-02 840.09


    Example 2.

    In this numerical example, we consider the space domain $ \Omega = (0, 1)\times(0, 1) $, the time interval $ J = (0, \frac{1}{2}] $, $ \omega(\alpha) = \Gamma(3-\alpha) $ and the exact solution $ u(x, y, t) = t^2\sin(2\pi x)\sin(2\pi y) $. The source term is then given by

    $ f(x,y,t) = 2t\sin(2\pi x)\sin(2\pi y)+\Gamma(3)\frac{t(t-1)}{\ln t}\sin(2\pi x)\sin(2\pi y)+8t^2\pi^2\sin(2\pi x)\sin(2\pi y)+16t\pi^2\sin(2\pi x)\sin(2\pi y)+\left(t^2\sin(2\pi x)\sin(2\pi y)\right)^2. $

    The training dataset is shown in Figure 5 and the collocation and boundary points are selected by random sampling method.

    To better illustrate the behavior of the predicted solution, Figure 4 portrays the contour plot of the exact and predicted solutions, where the training set consists of $ 961 $ collocation points and $ 124 $ boundary points selected by the equidistant uniform sampling method and the network architecture consists of 12 hidden layers with 60 neurons per layer.
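
    The stated counts are consistent with a $ 33\times33 $ equidistant grid on $ [0, 1]^2 $: its $ 31\times31 $ interior nodes give the $ 961 $ collocation points and its edge nodes without the four corners give the $ 124 $ boundary points. The snippet below is one way to generate such a set; it is our assumption about the sampling, not the authors' code.

```python
import numpy as np

g = np.linspace(0.0, 1.0, 33)                      # 33 equidistant nodes per direction
X, Y = np.meshgrid(g, g, indexing="ij")
pts = np.column_stack([X.ravel(), Y.ravel()])
on_boundary = (pts[:, 0] == 0) | (pts[:, 0] == 1) | (pts[:, 1] == 0) | (pts[:, 1] == 1)
is_corner = np.isin(pts[:, 0], [0.0, 1.0]) & np.isin(pts[:, 1], [0.0, 1.0])
x_coll = pts[~on_boundary]                         # 961 interior collocation points
x_b = pts[on_boundary & ~is_corner]                # 124 boundary points
```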

    Figure 4.  Example 2: the contour plot of the exact (a) and predicted (b) solutions with $ N = 10 $, $ \theta = 1 $ and $ \Delta\alpha = \frac{1}{500} $ at $ t = 0.5 $.
    Figure 5.  Example 2: distribution of the collocation points (a) sampled in the domain $ \Omega $ and the boundary training dataset (b).

    Table 4 shows the impact of the depth and width of the network on the accuracy of the predicted solution. In Figure 6, we present how the relative $ L^2 $ error changes with respect to different numbers of grid points $ N $. Combining Table 4 and Figure 6, we observe that increasing the number of hidden layers or neurons generally has a positive effect on reducing the relative $ L^2 $ error.

    Table 4.  The relative $ L^2 $ error between the predicted solution with parameters $ N = 20 $, $ \theta = 1 $, $ \Delta\alpha = \frac{1}{500} $ and the exact solution for different numbers of hidden layers and neurons per layer.
    Hidden layers \ Neurons 20 30 40 50 60
    4 1.155984e-01 1.101123e-01 7.228454e-02 7.787707e-02 4.492646e-02
    6 6.273340e-02 6.088659e-02 9.802474e-02 6.536637e-02 6.245142e-02
    8 6.163051e-02 7.424834e-02 6.844676e-02 5.500003e-02 5.169303e-02

    Figure 6.  Example 2: the variation trend of relative $ L^2 $ error between the predicted solution with $ \theta = 1 $, $ \Delta\alpha = \frac{1}{500} $ and the exact solution.

    The results shown in Table 5 compare the performance of the single output and multi-output neural network schemes, where we select $ 40 $ boundary points and $ 200 $ collocation points in the given spatial domain by random sampling and set $ N = 10 $, $ \theta = 1 $ and $ \Delta\alpha = \frac{1}{500} $. One can see that using the multi-output neural network effectively improves the precision and reduces the computing time.

    Table 5.  The relative $ L^2 $ error and computing time given by the single output and multi-output neural network schemes.
    Neural network Layers Neurons Relative $ L^2 $ error CPU time (s)
    multi-output 4 20 1.080369e-01 52.84
    40 7.472573e-02 66.05
    6 20 6.654115e-02 72.71
    40 7.937601e-02 93.77
    single output 4 20 3.443264e-01 731.22
    40 7.054495e-01 728.11
    6 20 2.636401e-01 772.08
    40 1.410780e-01 892.94


    Further, we consider the following distributed-order fourth-order sub-diffusion model:

    $ u_t+D^{\omega}_tu-\Delta u+\Delta^2u+G(u) = f(\boldsymbol x,t),\quad(\boldsymbol x,t)\in\Omega\times J, $
    (4.5)

    with boundary condition

    $ u(\boldsymbol x,t) = \Delta u(\boldsymbol x,t) = 0,\quad \boldsymbol x\in\partial\Omega,\ t\in\bar{J}, $

    and initial condition

    $ u(\boldsymbol x,0) = u_0(\boldsymbol x),\quad \boldsymbol x\in\bar{\Omega}, $

    where the nonlinear term $ G(u) = u^2 $.

    Similarly, the corresponding loss function $ MSE_{total} $ can be calculated by

    $ MSE_{total} = MSE_{\ell}+MSE_b, $

    where

    $ MSE_{\ell} = \frac{1}{N\times N_x}\sum^{N}_{j=1}\sum^{N_x}_{i=1}\left|\ell^j(\boldsymbol x^i_{\ell})\right|^2,\qquad MSE_b = \frac{1}{N\times N_b}\sum^{N}_{j=1}\sum^{N_b}_{i=1}\left|v^j(\boldsymbol x^i_b;\delta)-0\right|^2+\frac{1}{N\times N_b}\sum^{N}_{j=1}\sum^{N_b}_{i=1}\left|\Delta v^j(\boldsymbol x^i_b;\delta)-0\right|^2. $

    Example 3.

    Here, we define the space-time domain $ \Omega\times J = (0, 1)\times(0, \frac{1}{2}] $. Considering $ \omega(\alpha) = \Gamma(3-\alpha) $ and the source term

    $ f(x,t) = 2t\sin(\pi x)+\Gamma(3)\frac{t(t-1)}{\ln t}\sin(\pi x)+t^2\pi^2\sin(\pi x)+t^2\pi^4\sin(\pi x)+\left(t^2\sin(\pi x)\right)^2, $
    (4.6)

    the exact solution can be given by $ u(x, t) = t^2\sin(\pi x) $. Similar to Example $ 1 $, we also randomly sample $ 200 $ collocation points in the space domain $ \Omega $.

    In order to conveniently observe the capability of our proposed method, Figure 7 shows how the predicted and exact solutions vary with the space point $ x $, where the parameters are set as $ N = 20 $, $ \Delta\alpha = \frac{1}{500} $, $ \theta = 1 $ and the network consists of $ 6 $ hidden layers with $ 50 $ neurons per layer. Figure 8 portrays the trajectory of the relative $ L^2 $ error for three different network architectures. Table 6 shows the impact of expanding the depth or width of the network on the accuracy of the predicted solutions. Based on Figure 8 and Table 6, it is easy to observe that increasing the number of hidden layers plays a positive role in improving the accuracy of the predicted solutions.

    Figure 7.  Example 3: the predicted and exact solutions at $ t = 0.5 $.
    Figure 8.  Example 3: the relative $ L^2 $ error between the predicted solution with $ \theta = 1 $, $ \Delta \alpha = \frac{1}{500} $ and the exact solution for different numbers of grid points.
    Table 6.  The relative $ L^2 $ error between the predicted solution with parameters $ N = 20 $, $ \theta = 1 $, $ \Delta\alpha = \frac{1}{500} $ and the exact solution for different numbers of hidden layers and neurons per layer.
    Hidden layers \ Neurons 20 30 40 50 60
    2 8.994465e-03 8.835675e-03 7.976196e-03 1.121301e-02 1.284193e-02
    4 6.698531e-03 6.623791e-03 1.012124e-02 7.129064e-03 6.760145e-03
    6 5.238653e-03 3.178300e-03 4.705057e-03 7.221552e-03 5.002096e-03


    The relative $ L^2 $ error and CPU time obtained by the multi-output neural network and single output neural network schemes are presented in Table 7, where we select $ 200 $ collocation points in the given spatial domain by random sampling method and set $ N = 10 $, $ \theta = 1 $ and $ \Delta\alpha = \frac{1}{500} $. The error of the proposed multi-output neural network scheme is smaller than that of the single output neural network scheme. For this 1D system, the multi-output neural network scheme is more efficient than the single output neural network scheme.

    Table 7.  The relative $ L^2 $ error and computing time given by the single output and multi-output neural network schemes.
    Neural network Layers Neurons Relative $ L^2 $ error CPU time (s)
    multi-output 4 20 8.027377e-03 334.07
    40 8.656712e-03 366.52
    6 20 8.315435e-03 557.78
    40 6.788274e-03 601.19
    single output 4 20 2.412753e-02 994.62
    40 2.365781e-02 1317.16
    6 20 2.428662e-02 1344.76
    40 2.139562e-02 1795.75


    Example 4.

    Now we take space domain $ \Omega = (0, 1)\times(0, 1) $ and time interval $ J = (0, \frac{1}{2}] $. Let $ \omega(\alpha) = \Gamma(3-\alpha) $ and the exact solution $ u(x, y, t) = t^2\sin(\pi x)\sin(\pi y) $. Then we arrive at the source term

    $ f(x,y,t) = 2t\sin(\pi x)\sin(\pi y)+\Gamma(3)\frac{t(t-1)}{\ln t}\sin(\pi x)\sin(\pi y)+2t^2\pi^2\sin(\pi x)\sin(\pi y)+4t^2\pi^4\sin(\pi x)\sin(\pi y)+\left(t^2\sin(\pi x)\sin(\pi y)\right)^2. $
    (4.7)

    Here, we apply the training data set shown in Figure 5.

    In order to more intuitively demonstrate the feasibility of our proposed method for solving this 2D system, Figure 9 shows the contour plot of $ u $ and $ |u-v| $, where the training set consists of $ 900 $ collocation points and $ 120 $ boundary points selected by the equidistant uniform sampling method and the network is composed of 6 hidden layers with 20 neurons per layer.

    Figure 9.  Example 4: the contour plot of $ u $ and $ |u-v| $ with $ \theta = 1 $, $ \Delta \alpha = \frac{1}{500} $ and $ N = 40 $ at $ t = 0.5 $.

    From the behavior of the relative $ L^2 $ error in Figure 10, one can see that the accuracy of the predicted solutions with a fixed network first increases and then gradually decreases. This is because the approximation ability of the neural network reaches saturation as the number of grid points $ N $ increases. Table 8 shows the relative $ L^2 $ error calculated by different network architectures. On the whole, the relative $ L^2 $ error decreases slightly as the depth of the network grows, while it first decreases slightly and then increases as the width of the network grows. To show the precision and efficiency of the multi-output neural network scheme for this 2D system, the relative $ L^2 $ error and CPU time obtained by the multi-output and single output neural network schemes are shown in Table 9, where we select $ 40 $ boundary points and $ 200 $ collocation points in the given spatial domain by random sampling and set $ N = 10 $, $ \theta = 1 $ and $ \Delta\alpha = \frac{1}{500} $. It illustrates that the multi-output neural network scheme is more accurate and efficient than the single output neural network scheme.

    Figure 10.  Example 4: the relative $ L^2 $ error between the predicted solution with $ \theta = 1 $, $ \Delta \alpha = \frac{1}{500} $ and the exact solution for different numbers of grid points.
    Table 8.  The relative $ L^2 $ error between the predicted solution with parameters $ N = 40 $, $ \theta = 1 $, $ \Delta\alpha = \frac{1}{500} $ and the exact solution for different numbers of hidden layers and neurons per layer.
    Hidden layers \ Neurons 20 30 40 50 60
    2 7.593424e-02 4.556132e-02 2.926755e-02 5.850593e-02 4.956921e-02
    4 3.716823e-02 2.498269e-02 3.388639e-02 4.073820e-02 5.282780e-02
    6 2.690082e-02 3.039336e-02 2.769315e-02 4.331023e-02 6.456580e-02

    Table 9.  The relative $ L^2 $ error and computing time given by the single output and multi-output neural network schemes.
    Neural network Layers Neurons Relative $ L^2 $ error CPU time (s)
    multi-output 4 20 4.549939e-02 345.03
    40 5.847813e-02 339.46
    6 20 5.680785e-02 466.02
    40 6.110534e-02 528.10
    single output 4 20 1.455785e-01 902.99
    40 1.330922e-01 1309.05
    6 20 1.496581e-01 1469.85
    40 1.322566e-01 1990.96


    In this article, a multi-output physics informed neural network combined with the Crank-Nicolson scheme, the FBN-$ \theta $ formula and the composite numerical integral formula was constructed to solve 1D and 2D nonlinear time distributed-order models. The calculation process is described in detail, and numerical experiments are provided to demonstrate the effectiveness and feasibility of our algorithm. Compared with the results calculated by a single output neural network combined with the FBN-$ \theta $ formula and the Crank-Nicolson scheme, the proposed multi-output neural network scheme is clearly more efficient and accurate. Moreover, some numerical methods, such as the finite difference or finite element method, need to linearize the nonlinear term, which gives rise to extra costs; this linearization step is avoided entirely by PINN. Further work will investigate the application of the proposed methodology to high-dimensional problems and practical problems [36,37,38,39,40].

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors would like to thank the editor and all the anonymous referees for their valuable comments, which greatly improved the presentation of the article. This work is supported by the National Natural Science Foundation of China (12061053, 12161063), Natural Science Foundation of Inner Mongolia (2021MS01018), Young innovative talents project of Grassland Talents Project, Program for Innovative Research Team in Universities of Inner Mongolia Autonomous Region (NMGIRT2413, NMGIRT2207), and 2023 Postgraduate Research Innovation Project of Inner Mongolia (S20231026Z).

    The authors declare that they have no conflict of interest.

    [1] Ding T, Schloss PD (2014) Dynamics and associations of microbial community types across the human body. Nature 509: 357–360. doi: 10.1038/nature13178
    [2] Human Microbiome Project C (2012) Structure, function and diversity of the healthy human microbiome. Nature 486: 207–214. doi: 10.1038/nature11234
    [3] Human Microbiome Project C (2012) A framework for human microbiome research. Nature 486: 215–221. doi: 10.1038/nature11209
    [4] Integrative HMPRNC (2014) The Integrative Human Microbiome Project: dynamic analysis of microbiome-host omics profiles during periods of human health and disease. Cell Host Microbe 16: 276–289. doi: 10.1016/j.chom.2014.08.014
    [5] Marchesi JR, Ravel J (2015) The vocabulary of microbiome research: a proposal. Microbiome 3: 31. doi: 10.1186/s40168-015-0094-5
    [6] Kuczynski J, Lauber CL, Walters WA, et al. (2012) Experimental and analytical tools for studying the human microbiome. Nat Rev Genet 13: 47–58.
    [7] Althani AA, Marei HE, Hamdi WS, et al. (2015) Human Microbiome and its Association With Health and Diseases. J Cell Physiol 231: 1688–1694.
    [8] Martin R, Miquel S, Langella P, et al. (2014) The role of metagenomics in understanding the human microbiome in health and disease. Virulence 5: 413–423. doi: 10.4161/viru.27864
    [9] Cox MJ, Cookson WO, Moffatt MF (2013) Sequencing the human microbiome in health and disease. Hum Mol Genet 22: R88–94. doi: 10.1093/hmg/ddt398
    [10] Brooks JP, Edwards DJ, Harwich MD, Jr., et al. (2015) The truth about metagenomics: quantifying and counteracting bias in 16S rRNA studies. BMC Microbiol 15: 66. doi: 10.1186/s12866-015-0351-6
    [11] Abusleme L, Hong BY, Dupuy AK, et al. (2014) Influence of DNA extraction on oral microbial profiles obtained via 16S rRNA gene sequencing. J Oral Microbiol 6.
    [12] Wu GD, Lewis JD, Hoffmann C, et al. (2010) Sampling and pyrosequencing methods for characterizing bacterial communities in the human gut using 16S sequence tags. BMC Microbiol 10: 206. doi: 10.1186/1471-2180-10-206
    [13] Momozawa Y, Deffontaine V, Louis E, et al. (2011) Characterization of bacteria in biopsies of colon and stools by high throughput sequencing of the V2 region of bacterial 16S rRNA gene in human. PLOS ONE 6: e16952. doi: 10.1371/journal.pone.0016952
    [14] Lazarevic V, Gaia N, Girard M, et al. (2013) Comparison of DNA extraction methods in analysis of salivary bacterial communities. PLOS ONE 8: e67699. doi: 10.1371/journal.pone.0067699
    [15] Willner D, Daly J, Whiley D, et al. (2012) Comparison of DNA extraction methods for microbial community profiling with an application to pediatric bronchoalveolar lavage samples. PLOS ONE 7: e34605. doi: 10.1371/journal.pone.0034605
    [16] Bogaert D, De Groot R, Hermans PW (2004) Streptococcus pneumoniae colonisation: the key to pneumococcal disease. Lancet Infect Dis 4: 144–154. doi: 10.1016/S1473-3099(04)00938-7
    [17] Garcia-Rodriguez JA, Fresnadillo Martinez MJ (2002) Dynamics of nasopharyngeal colonization by potential respiratory pathogens. J Antimicrob Chemother 50 Suppl S2: 59–73.
    [18] Biesbroek G, Tsivtsivadze E, Sanders EA, et al. (2014) Early respiratory microbiota composition determines bacterial succession patterns and respiratory health in children. Am J Respir Crit Care Med 190: 1283–1292. doi: 10.1164/rccm.201407-1240OC
    [19] Teo SM, Mok D, Pham K, et al. (2015) The infant nasopharyngeal microbiome impacts severity of lower respiratory infection and risk of asthma development. Cell Host Microbe 17: 704–715. doi: 10.1016/j.chom.2015.03.008
    [20] Feazel LM, Santorico SA, Robertson CE, et al. (2015) Effects of Vaccination with 10-Valent Pneumococcal Non-Typeable Haemophilus influenza Protein D Conjugate Vaccine (PHiD-CV) on the Nasopharyngeal Microbiome of Kenyan Toddlers. PLOS ONE 10: e0128064. doi: 10.1371/journal.pone.0128064
    [21] Prevaes SM, de Winter-de Groot KM, Janssens HM, et al. (2015) Development of the Nasopharyngeal Microbiota in Infants with Cystic Fibrosis. Am J Respir Crit Care Med.
    [22] Cremers AJ, Zomer AL, Gritzfeld JF, et al. (2014) The adult nasopharyngeal microbiome as a determinant of pneumococcal acquisition. Microbiome 2: 44. doi: 10.1186/2049-2618-2-44
    [23] Allen EK, Koeppel AF, Hendley JO, et al. (2014) Characterization of the nasopharyngeal microbiota in health and during rhinovirus challenge. Microbiome 2: 22. doi: 10.1186/2049-2618-2-22
    [24] Biesbroek G, Bosch AA, Wang X, et al. (2014) The impact of breastfeeding on nasopharyngeal microbial communities in infants. Am J Respir Crit Care Med 190: 298–308.
    [25] Sakwinska O, Bastic Schmid V, Berger B, et al. (2014) Nasopharyngeal microbiota in healthy children and pneumonia patients. J Clin Microbiol 52: 1590–1594. doi: 10.1128/JCM.03280-13
    [26] Bassis CM, Tang AL, Young VB, et al. (2014) The nasal cavity microbiota of healthy adults. Microbiome 2: 27. doi: 10.1186/2049-2618-2-27
    [27] Perez-Losada M, Castro-Nallar E, Bendall ML, et al. (2015) Dual Transcriptomic Profiling of Host and Microbiota during Health and Disease in Pediatric Asthma. PLOS ONE 10: e0131819. doi: 10.1371/journal.pone.0131819
    [28] Castro-Nallar E, Shen Y, Freishtat RJ, et al. (2015) Integrating metagenomics and host gene expression to characterize asthma-associated microbial communities. BMC Medical Genomics 8: 50. doi: 10.1186/s12920-015-0121-1
    [29] Bogaert D, Keijser B, Huse S, et al. (2011) Variability and diversity of nasopharyngeal microbiota in children: a metagenomic analysis. PLOS ONE 6: e17035. doi: 10.1371/journal.pone.0017035
    [30] Pérez-Losada M, Crandall KA, Freishtat RJ (2016) Two sampling methods yield distinct microbial signatures in the nasopharynx of asthmatic children. Microbiome[in press].
    [31] Benton AS, Wang Z, Lerner J, et al. (2010) Overcoming heterogeneity in pediatric asthma: tobacco smoke and asthma characteristics within phenotypic clusters in an African American cohort. J Asthma 47: 728–734. doi: 10.3109/02770903.2010.491142
    [32] Kozich JJ, Westcott SL, Baxter NT, et al. (2013) Development of a dual-index sequencing strategy and curation pipeline for analyzing amplicon sequence data on the MiSeq Illumina sequencing platform. Appl Environ Microbiol 79: 5112–5120. doi: 10.1128/AEM.01043-13
    [33] Schloss PD, Westcott SL, Ryabin T, et al. (2009) Introducing mothur: Open-Source, Platform-Independent, Community-Supported Software for Describing and Comparing Microbial Communities. Appl Environ Microbiol 75: 7537–7541. doi: 10.1128/AEM.01541-09
    [34] Schloss PD, Gevers D, Westcott SL (2011) Reducing the effects of PCR amplification and sequencing artifacts on 16S rRNA-based studies. PLOS ONE 6: e27310. doi: 10.1371/journal.pone.0027310
    [35] Edgar RC, Haas BJ, Clemente JC, et al. (2011) UCHIME improves sensitivity and speed of chimera detection. Bioinformatics 27: 2194–2200. doi: 10.1093/bioinformatics/btr381
    [36] Wang Q, Garrity GM, Tiedje JM, et al. (2007) Naive Bayesian classifier for rapid assignment of rRNA sequences into the new bacterial taxonomy. Appl Environ Microbiol 73: 5261–5267. doi: 10.1128/AEM.00062-07
    [37] Caporaso JG, Kuczynski J, Stombaugh J, et al. (2010) QIIME allows analysis of high-throughput community sequencing data. Nat Methods 7: 335–336. doi: 10.1038/nmeth.f.303
    [38] Price MN, Dehal PS, Arkin AP (2010) FastTree 2-Approximately Maximum-Likelihood Trees for Large Alignments. PLOS ONE 5.
    [39] Faith DP (1992) Conservation evaluation and phylogenetic diversity. Biol Conserv 61: 1–10. doi: 10.1016/0006-3207(92)91201-3
    [40] Dixon P (2003) VEGAN, a package of R functions for community ecology. J Veg Sci 14: 927–930. doi: 10.1111/j.1654-1103.2003.tb02228.x
    [41] RStudioTeam (2015) RStudio: Integrated Development for R. RStudio, Inc., Boston, MA URL http://www.rstudio.com/.
    [42] Biesbroek G, Wang X, Keijser BJ, et al. (2014) Seven-valent pneumococcal conjugate vaccine and nasopharyngeal microbiota in healthy children. Emerg Infect Dis 20: 201–210. doi: 10.3201/eid2002.131220
    [43] Yan M, Pamp SJ, Fukuyama J, et al. (2013) Nasal microenvironments and interspecific interactions influence nasal microbiota complexity and S. aureus carriage. Cell Host Microbe 14: 631–640. doi: 10.1016/j.chom.2013.11.005
    [44] Cremers AJH, Zomer AL, Gritzfeld JF, et al. (2014) The adult nasopharyngeal microbiome as a determinant of pneumococcal acquisition. Microbiome 2.
    [45] Bassis CM, Erb-Downward JR, Dickson RP, et al. (2015) Analysis of the upper respiratory tract microbiotas as the source of the lung and gastric microbiotas in healthy individuals. MBio 6: e00037.
    [46] Morgan JL, Darling AE, Eisen JA (2010) Metagenomic sequencing of an in vitro-simulated microbial community. PLOS ONE 5: e10209. doi: 10.1371/journal.pone.0010209
    [47] Yuan S, Cohen DB, Ravel J, et al. (2012) Evaluation of methods for the extraction and purification of DNA from the human microbiome. PLOS ONE 7: e33865. doi: 10.1371/journal.pone.0033865
    [48] P OC, Aguirre de Carcer D, Jones M, et al. (2011) The effects from DNA extraction methods on the evaluation of microbial diversity associated with human colonic tissue. Microb Ecol 61: 353–362. doi: 10.1007/s00248-010-9771-x
    [49] Mackenzie BW, Waite DW, Taylor MW (2015) Evaluating variation in human gut microbiota profiles due to DNA extraction method and inter-subject differences. Front Microbiol 6: 130.
  • © 2016 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)