    The development of computer science and technology has made vast amounts of rapidly changing information available every day, and making better use of it is an important challenge. This has promoted the popularity of knowledge graphs (KGs) [1,2,3]: a KG is a semantic network [4] that reveals and describes the relationships between entities in the real world. In this paper, a knowledge graph is a set of triples. Entities are the most basic elements in a KG, and different entities may be connected by different relationships. Let $ \mathbf E $ represent a set of entities and $ \mathbf R $ a set of relationships between entities. A knowledge graph can then be expressed as a collection of triples:

    $ KG = \{(h, r, t) \mid h, t \in \mathbf{E},\ r \in \mathbf{R}\} $    (1.1)

    where $ h $ represents the head entity, $ t $ the tail entity, and $ r $ the relationship connecting them, e.g., $ < Obama, Place\_of\_Birth, Honolulu > $. In essence, KGs are multi-relational graphs composed of entities (nodes) and relationships (edges); each edge links a head entity to a tail entity through a relationship. Their graph structure and large volume make KGs useful for regular users who need to access valuable information, and many widely used knowledge bases are available on the Internet, such as Freebase [5], Wikidata [6] and DBpedia [7]. When a knowledge graph is applied to a natural language task, its correctness and coverage determine its contribution to that task. Common natural language processing (NLP) [8] tasks, such as question-answering systems and information retrieval, often require a KG system as support. However, tasks based on KGs are often hampered by incompleteness; incompleteness in this paper means that a triple is missing an entity or a relationship. It is therefore necessary to study Knowledge Graph Completion (KGC) [9,10,11] methods that complete the missing information, improving both the quality of KGs and the performance of real-world applications. KGC methods mainly use the structured knowledge already contained in a KG to infer missing factual information. Table 1 shows that even ultra-large-scale knowledge graphs still lack a great deal of important information.

    Table 1.  Statistics of missing type and quantity.
    Dataset Missing type Proportion missing
    Freebase place of birth 71%
    Freebase nationality 75%
    Freebase parents 94%
    DBpedia place of birth 60%
    DBpedia known for 58%

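
    To make the set-of-triples view in Eq (1.1) concrete, here is a minimal Python sketch of a toy KG; the entities and facts are illustrative stand-ins, not drawn from any benchmark:

```python
# A toy KG as a set of (head, relation, tail) triples, mirroring Eq (1.1).
KG = {
    ("Obama", "Place_of_Birth", "Honolulu"),
    ("Honolulu", "Contained_by", "U.S.A"),
}

E = {h for h, _, _ in KG} | {t for _, _, t in KG}  # entity set E
R = {r for _, r, _ in KG}                          # relationship set R

def tails(head, relation):
    """All known tail entities for a query <h, r, ?>."""
    return {t for h, r, t in KG if h == head and r == relation}

print(tails("Obama", "Place_of_Birth"))  # {'Honolulu'}
print(tails("Obama", "Nationality"))     # set(): a gap for KGC to fill
```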

    In general, KGC tasks can be divided into two subtasks: link prediction [12] and entity prediction [13]. Link prediction aims to automatically complete the missing relationships in a knowledge graph; entity prediction aims to automatically complete the missing entities. This article focuses on entity prediction tasks. The benefit of knowledge graph embedding (KGE) [14] methods in a variety of practical applications stimulates us to explore their promising use in solving KGC problems, and extensive research [15] has applied KGE methods to KGC to fill in missing entities or relationships. KGE methods embed the entities and relationships of a knowledge graph into a continuous vector space and then support downstream KG applications such as KGC. Most existing KGE techniques perform entity prediction based on observed facts: an entity prediction model first represents entities and relationships as real-valued vectors, then defines a scoring function to measure the plausibility of triples, with the ultimate goal of maximizing the total plausibility of observed triples. The embedding vectors of entities and relationships interact through the components of given triples. Taking the model ProjE [16] as an example, before applying the scoring function the model simply uses diagonal matrices to combine entities and relationships. The refactoring formula is as follows:

    $ \mathbf{e} \oplus \mathbf{r} = \mathbf{D}_{e}\mathbf{e} + \mathbf{D}_{r}\mathbf{r} $    (1.2)

    where $ \oplus $ denotes the combination operator and $ \mathbf{D}_{e} $ and $ \mathbf{D}_{r} $ are the specific matrix combination operators (diagonal matrices). This combination ignores the interaction between entities and relationships within the same embedding space, resulting in insufficient interaction. To bridge this gap, we explore how to handle the problem of insufficient interaction and use a simple, efficient neural network to perform KGC tasks.
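
    As a minimal illustration of why Eq (1.2) limits interaction, the NumPy sketch below applies the diagonal combination to random toy embeddings; the dimension and all values are assumptions for illustration only:

```python
import numpy as np

# The diagonal combination of Eq (1.2) on random toy embeddings (k = 4 here).
k = 4
rng = np.random.default_rng(0)
e, r = rng.normal(size=k), rng.normal(size=k)      # entity / relation embeddings
d_e, d_r = rng.normal(size=k), rng.normal(size=k)  # diagonals of D_e and D_r

e_r = d_e * e + d_r * r
# Diagonal matrices act elementwise: dimension i of e never mixes with
# dimension j of r for i != j, which is the insufficient interaction
# that the feature processing layer of FRS is designed to address.
print(e_r)
```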

    Inspired by the above observations, this paper proposes a new entity prediction model called FRS (Feature Refactoring Scoring), based on the ideas of shared parameters and knowledge graph embedding. The main characteristics of the proposed approach are:

    1. Instead of requiring a prerequisite or a pre-training process, FRS is a self-sufficient model over length-1 relationships and does not rely on expensive multi-hop paths through the knowledge graph.

    2. FRS innovatively introduces feature engineering methods into knowledge graph completion models. In the feature processing layer, entities and relationships are aligned in the same feature space using the idea of a shared-parameter neural network. Experiments show that the feature processing layer alleviates the problem of insufficient interaction and provides a new approach to the KGC task.

    3. Through extensive experiments with FRS, we find that the embedding size and the negative candidate sampling probability affect the experimental results in opposite directions: increasing the former improves prediction, while increasing the latter degrades it. This observation aids the fine-tuning of entity prediction models.

    4. Unlike entity prediction models with complex networks, FRS can be regarded as a simple three-layer neural network for entity prediction, yet it outperforms state-of-the-art KGC methods.

    This section introduces the basic concepts, definitions, and abbreviations used in this article. In addition, we review different types of entity prediction algorithms and their score functions.

    $ \mathbf{Notations}: $ Throughout this paper, we use uppercase bold letters for matrices (e.g., $ \mathbf{M} $, $ \mathbf{W} $) and lowercase bold letters for vectors (e.g., $ \mathbf{h} $, $ \mathbf{r} $, $ \mathbf{t} $). $ \|\mathbf{x}\|_{1 / 2} $ denotes either the $ \ell_{1} $ norm or the $ \ell_{2} $ norm. Let tanh(x) denote the hyperbolic tangent function and sigmoid(x) the sigmoid function. The score function measures the plausibility of triples.

    Definition 1. (Knowledge Graphs Embedding): a method of knowledge representation learning which embeds entities and relationships of knowledge graphs into continuous vector spaces.

    Definition 2. (Entity Prediction): given a pair $ < r, t > $ or $ < h, r > $ as input, we treat entity prediction as a ranking problem in which the top-k candidates in the scoring list are the prediction results. The output list must respect the following rule: for any two tail entities $ e_i $ and $ e_j $, if $ < h, r, e_i > $ exists in the KG and $ < h, r, e_j > $ does not, then $ e_j $ should be ranked after $ e_i $.

    We summarize the mentioned abbreviations and concepts in this paper in Table 2.

    Table 2.  The important symbols and their definitions.
    Abbreviations or Concepts Definitions or Explanations
    E a set of entities
    R a set of relationships
    $ < h, r, t > $ a triple, i.e., $ < head entity, relationship, tail entity > $
    KG a knowledge graph
    KGE knowledge graphs embedding
    KGC knowledge graphs completion
    FRS Feature Refactoring Scoring
    KRL knowledge representation learning


    Entity prediction algorithms can be categorized into distance models and similarity matching models. The main difference is that different models use different scoring functions. The distance model uses a distance-based scoring function, while the similarity matching model uses a similarity-based semantic matching scoring function.

    The distance model usually uses a distance-based scoring function to measure the plausibility of triples. The unstructured model (UM) [17] and structured embedding (SE) [18] are early models; their structures are simple, yet they achieve good prediction results.

    UM [17] treats all triples as having a single relationship, setting every relationship vector to zero ($ \mathbf { r } = \mathbf { 0 } $), so the scoring function is

    $ f_r(h, t) = \|\mathbf{h} - \mathbf{t}\|_{2}^{2} $    (2.1)

    The model is simple and easy to extend; however, it does not distinguish well between the different relationships in a KG. To solve this problem, SE [18] learns relationship-specific matrices for entities. The score function is defined as:

    $ f_r(h, t) = \|\mathbf{M}_{r,1}\mathbf{h} - \mathbf{M}_{r,2}\mathbf{t}\|_{1} $    (2.2)

    where $ \mathbf{M}_{r, 1} $, $ \mathbf{M}_{r, 2} \in \mathbb{R}^{d \times d} $ are relationship-specific matrices. SE transforms the entity vectors $ \mathbf h $ and $ \mathbf t $ through the matrices of relationship $ \mathbf r $ and then measures their similarity in the transformed space, which reflects the semantic correlation between head and tail entities in the relationship space. SE has a significant drawback: it projects the head and tail entities with two different matrices that coordinate poorly, so it often cannot accurately describe the semantic relation between the two entities and the relationship. Although UM and SE have simple structures and achieved good entity prediction results early on, they are not effective in scenarios that depend directly on the relationship.

    With the development of distributed representation, researchers found that word vectors produced by the word2vec [19] algorithm capture latent semantic relations between words, as shown below:

    $ \mathbf{v}_{Tokyo} - \mathbf{v}_{Japan} \approx \mathbf{v}_{Berlin} - \mathbf{v}_{Germany} $    (2.3)

    Since triple data carries explicit relationship information, the relationship $ r $ can be interpreted as a translation from $ h $ to $ t $. This is the main idea of the most representative distance model, TransE [20]. TransE is inspired by translation invariance in the word vector space, where the relationship between words usually corresponds to a translation in the latent feature space. TransE requires that $ h + r \approx t $ for a plausible triple. The score function is defined as:

    $ f_r(h, t) = \|\mathbf{h} + \mathbf{r} - \mathbf{t}\|_{1/2} $    (2.4)

    where the norm in $ f_{r}(h, t) $ can be either $ L_{1} $ or $ L_{2} $. Compared with previous models, TransE has fewer parameters and low computational complexity, and can directly model semantic relations between entities and relationships. However, due to its simplicity, TransE cannot model complex relationships. TransH [21] alleviates this problem by projecting $ \mathbf h $ and $ \mathbf t $ onto a relationship-specific hyperplane. The score function is defined as:

    $ f_r(h, t) = \|(\mathbf{h} - \mathbf{w}_r^{\top}\mathbf{h}\,\mathbf{w}_r) + \mathbf{d}_r - (\mathbf{t} - \mathbf{w}_r^{\top}\mathbf{t}\,\mathbf{w}_r)\|_{2}^{2} $    (2.5)

    where $ \mathbf{w}_{r} $ is the normal vector of the relationship-specific hyperplane and $ \mathbf{d}_{r} $ is the relationship translation vector. TransH performs well on one-to-many relationships and gives an entity different representations under different relationships, but it still assumes that entities and relationships can be represented in a single semantic space; this simple assumption leads to inaccurate modeling of entities and relationships.
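
    The following NumPy sketch evaluates the scoring functions of Eqs (2.4) and (2.5) on random toy vectors; the dimensions and values are illustrative assumptions, not trained embeddings:

```python
import numpy as np

def transe_score(h, r, t, p=1):
    """TransE, Eq (2.4): f_r(h, t) = ||h + r - t||_{1 or 2}; lower is more plausible."""
    return np.linalg.norm(h + r - t, ord=p)

def transh_score(h, d_r, w_r, t):
    """TransH, Eq (2.5): project h and t onto the hyperplane with unit normal w_r,
    then translate: ||(h - w_r^T h w_r) + d_r - (t - w_r^T t w_r)||_2^2."""
    h_proj = h - (w_r @ h) * w_r
    t_proj = t - (w_r @ t) * w_r
    return np.linalg.norm(h_proj + d_r - t_proj) ** 2

k = 8
rng = np.random.default_rng(1)
h, r, t = rng.normal(size=k), rng.normal(size=k), rng.normal(size=k)
w_r = rng.normal(size=k)
w_r /= np.linalg.norm(w_r)  # unit normal of the toy hyperplane
print(transe_score(h, r, t), transh_score(h, r, w_r, t))  # r reused as a toy d_r
```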

    The similarity matching model usually uses a similarity-based semantic matching scoring function. They measure the plausibility of triples by matching the underlying semantic information of the entity and the relationships embodied in its vector space representation.

    RESCAL [22] is based on three-dimensional tensor factorization. The score function is defined as follows:

    $ s(h, r, t) = \mathbf{h}^{\top}\mathbf{M}_{r}\mathbf{t} $    (2.6)

    where $ \mathbf{M}_{r} $ is a $ k \times k $ relationship matrix. RESCAL learns the embeddings of entities and relationships by minimizing the tensor reconstruction error and then completes the KG using the scores of the reconstructed tensor. The Neural Tensor Network (NTN) [23] replaces the linear transformation layer of traditional neural networks with a bilinear tensor that links the head and tail entity vectors across different dimensions. The score function is defined as:

    $ f_r(h, t) = \mathbf{u}_{r}^{\top}\tanh(\mathbf{h}^{\top}\mathbf{M}_{r}\mathbf{t} + \mathbf{M}_{r,1}\mathbf{h} + \mathbf{M}_{r,2}\mathbf{t} + \mathbf{b}_{r}) $    (2.7)

    where $ \mathbf{M}_{r} \in \mathbb{R}^{d \times d \times k} $ is a tensor, $ \mathbf{M}_{r, 1}, \mathbf{M}_{r, 2} \in \mathbb{R}^{k \times d} $ are weight matrices, and $ \mathbf{u}_{r} $ is the relationship vector. NTN captures second-order correlations by introducing tensors to extend the single-layer neural network model, but its parameter count and complexity make it difficult to apply to large-scale KGs. ProjE [16] fills in missing information in KGs with a shared-variable neural network; its authors report that ProjE has a small parameter size and performs well on standard datasets. ProjE defines its function as:

    $ h(\mathbf{e}, \mathbf{r}) = \mathrm{sigmoid}(\mathbf{W}^{c}\tanh(\mathbf{e} \oplus \mathbf{r}) + \mathbf{b}_{p}) $    (2.8)

    where $ \mathbf{W}^{c} $ is the candidate matrix, and the combination operator is defined as:

    $ \mathbf{e} \oplus \mathbf{r} = \mathbf{D}_{e}\mathbf{e} + \mathbf{D}_{r}\mathbf{r} + \mathbf{b}_{c} $    (2.9)

    where $ \mathbf{D}_{e} $ and $ \mathbf{D}_{r} $ are diagonal matrices. However, sufficient feature processing of the triple data is overlooked before the matrix combination operators are applied. SENN [24] integrates the prediction of head entities, relationships and tail entities into a single neural-network-based framework. The score function of $ head\_pred(r, t) $ is defined as:

    $ s(r, t) = \mathbf{v}_{h}\mathbf{A}^{E} = f(\cdots f(f([\mathbf{r}; \mathbf{t}]\mathbf{W}_{h,1} + \mathbf{b}_{h,1})) \cdots \mathbf{W}_{h,n} + \mathbf{b}_{h,n})\mathbf{A}^{E} $    (2.10)

    where $ f $ is an activation function, $ n $ represents the number of neural network layers, $ \mathbf W $ are the weight matrices, and $ \mathbf b $ is a bias term. SENN improves both relation prediction and entity prediction results. However, for a given triple, SENN must compute separate prediction label vectors for the head entity and the tail entity, which is more complicated than a traditional model that uses a single prediction label vector to complete the entity prediction task.
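
    As a sketch of these similarity-based scores, the snippet below evaluates the RESCAL bilinear form of Eq (2.6) and an NTN-style score in the spirit of Eq (2.7); every weight here is a random toy stand-in for trained parameters:

```python
import numpy as np

def rescal_score(h, M_r, t):
    """Eq (2.6): s(h, r, t) = h^T M_r t, with a full k x k relation matrix."""
    return h @ M_r @ t

def ntn_score(h, t, M_r, M_r1, M_r2, b_r, u_r):
    """Eq (2.7): f_r(h, t) = u_r^T tanh(h^T M_r t + M_{r,1} h + M_{r,2} t + b_r),
    where the d x d x k tensor M_r yields a k-dimensional bilinear term."""
    bilinear = np.einsum('i,ijk,j->k', h, M_r, t)  # h^T M_r[:, :, s] t per slice s
    return u_r @ np.tanh(bilinear + M_r1 @ h + M_r2 @ t + b_r)

d, k = 6, 3
rng = np.random.default_rng(2)
h, t = rng.normal(size=d), rng.normal(size=d)
print(rescal_score(h, rng.normal(size=(d, d)), t))
print(ntn_score(h, t, rng.normal(size=(d, d, k)), rng.normal(size=(k, d)),
                rng.normal(size=(k, d)), rng.normal(size=k), rng.normal(size=k)))
```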

    Similarity matching models tend to use simple matrix operators to combine entities and relationships, which is effective, as models such as ProjE [16] and RESCAL [22] demonstrate. However, the importance of feature processing for the triple data is overlooked. Feature processing is mainly used to enhance the interaction within triples, i.e., between entities and entities, relationships and relationships, and entities and relationships. Since similarity matching models rest on latent semantic similarity, the interaction among the triple data is the key to the entity prediction model. In this paper, interaction relies on feature engineering. There are usually two approaches to feature engineering. One is manual processing, which requires substantial human intervention and a deep understanding of the task to build good features; this is feasible, but for complex tasks it consumes a great deal of manpower and resources. The other is knowledge representation learning [25,26,27], which learns new representations directly from the data through machine learning algorithms and can learn appropriate features for specific tasks. In this paper, a variant of the feedforward neural network is used as the feature engineering method to alleviate the problem of insufficient interaction.

    We start from the basic feedforward network model with a single neuron. A feedforward neural network is a basic network in which all nodes are organized into successive layers, each node receiving input from nodes in earlier layers. Given inputs $ x_{i} $, the output can be defined as:

    $ y = f\left(\sum_{i=1}^{n} W_{i} x_{i} + b\right) $    (3.1)

    where the parameters $ W_{i} $ are used to fit the data, $ b $ is a bias term, and $ f $ is the activation function. In a feedforward network, the chain structure provides the interaction between layers, and the number of layers represents the depth of the network. This complex mapping can be seen as the interaction of feature information. We therefore introduce a variant of the feedforward network, based on shared parameters and residuals [28], designed specifically for the feature representation of entities and relationships. The structure of the F network is shown in Figure 1.

    Figure 1.  The structure of the F network, where $ n $ is the number of layers, $ x_{i} $ is the input of the F network, and $ F^{(n)}(x_{i}) $ is its output.

    Given the input $ x_{i} $, the output of the $ n^{\mathrm{th}} $ layer, $ F^{(n)}(x_{i}) $, can be defined as:

    $ F^{(n)}(x_{i}) = \sigma(F^{(n-1)}(x_{i})\,W) + x_{i} $    (3.2)

    where $ n $ is the number of layers, $ \sigma $ denotes the mapping between the input data and the shared parameter $ W $, and $ x_{i} $ is the residual term. According to the structure in Figure 1 and Equation (3.2), we get:

    $ F^{(1)}(x_{i}) = f(x_{i} W) + x_{i} $    (3.3)
    $ F^{(2)}(x_{i}) = f(f(x_{i} W) W) + x_{i} $    (3.4)

    where $ f $ represents the activation function. The defining property of the F network is that it uses the ideas of shared parameters and residuals to perform feature processing on the input data.
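
    A minimal NumPy sketch of the F network following Eqs (3.3) and (3.4), assuming a toy dimension and a randomly initialized shared matrix $ W $:

```python
import numpy as np

def f_network(x, W, n_layers=2, act=np.tanh):
    """n mappings through one shared weight matrix W, with the input
    added back once as a residual: F^(2)(x) = f(f(x W) W) + x."""
    out = x
    for _ in range(n_layers):
        out = act(out @ W)  # every layer reuses the same shared W
    return out + x          # residual term

k = 5
rng = np.random.default_rng(3)
W = rng.normal(size=(k, k)) / np.sqrt(k)  # toy shared parameter matrix
x = rng.normal(size=k)
print(f_network(x, W))
```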

    This section introduces the details of the FRS model proposed in this paper and briefly presents the loss function and the negative sampling method.

    A typical entity prediction model usually consists of two steps:

    step 1) Representation of entities and relationships.

    step 2) Definition of similarity scoring function.

    The first step embeds entities and relationships into a continuous low-dimensional vector space; in the second step, a scoring function measures the plausibility of triples. Following the structure of the typical entity prediction model, this paper proposes a new neural network model in which entities and relationships are handled by a three-layer architecture with a feature processing layer, a refactoring layer, and a candidate prediction layer. The feature processing layer and the refactoring layer produce the representations of entities and relationships, and the candidate prediction layer then computes the similarity score, in line with the construction steps of a typical entity prediction model. FRS works as follows: given two embeddings as input, we treat entity prediction as a ranking problem in which the top-k candidates in the scoring list are the prediction results. To obtain this score list, we rank every possible candidate using a refactoring operator defined over the two input embeddings through the specific feature engineering method. Figure 2 takes tail entity prediction as an example: the input is $ < Leonardo da Vinci, Nationality, ? > $, and the candidate entities are $ < Italy > $ and $ < U.S.A > $. The yellow and brown nodes are row vectors from the entity embedding matrix, and the green nodes are row vectors from the relationship embedding matrix. Notably, the three-layer network adopts the idea of shared parameters, which reduces the number of training parameters and alleviates insufficient interaction.

    Figure 2.  FRS architecture for entity prediction. Take the tail entity prediction task as an example. The input is $ < Leonardo da Vinci, Nationality, ? > $. The two tail candidate entities are $ < Italy > $ and $ < U.S.A > $. The FRS model can be seen as a three-layer neural network structure with a feature processing layer, a refactoring layer, and a candidate prediction layer. The model finally outputs a list of scores of candidate entities. The yellow node represents the head entity, the green node represents the relationship, and the brown node represents the tail entity.

    Recent models such as HolE [29] and ProjE [16] have shown that specific matrix combination operators can refactor entities and relationships. However, the importance of feature processing for the triple data is overlooked before these combination operators are applied, and insufficient interaction may limit the performance of a KGC model, so it is necessary to perform feature processing on the input data. To solve the insufficient interaction problem, we propose a feature processing layer based on the F network. Intuitively, deeper F networks can more easily learn abstract and general feature information, but because training cost increases layer by layer, we choose the two-layer F network as the feature processing layer. It is defined as follows:

    $ F(\mathbf{e}) = f[f(\mathbf{e}\mathbf{W})\mathbf{W}] + \mathbf{e} $    (4.1)
    $ F(\mathbf{r}) = f[f(\mathbf{r}\mathbf{W})\mathbf{W}] + \mathbf{r} $    (4.2)

    where $ f $ represents the activation function used in the training process and $ \mathbf W $ is a $ k \times k $ square matrix of feature processing weights. In this layer, entities and relationships share the same weight matrix $ \mathbf W $, while the residual terms $ \mathbf e $ and $ \mathbf r $ are added back.

    From the interaction perspective, entities and relationships are aligned through the shared parameter $ \mathbf W $. This alignment can be viewed as shared attributes that entities and relationships acquire by passing through the same embedding space: $ \mathbf W $ acts as that embedding space, through which entities and relationships complete their interaction. In summary, the feature processing layer uses shared parameters to realize implicit interaction between entities and relationships.

    The middle layer is the refactoring layer. Like most KGE models, this layer uses a specific matrix combination operator to combine entities and relationships. It is defined as:

    $ R(\mathbf{e}, \mathbf{r}) = [F(\mathbf{e}) + F(\mathbf{r})]\mathbf{V} $    (4.3)

    where $ \mathbf V $ is a $ k \times k $ refactoring weight matrix. The refactoring layer performs explicit interaction between entities and relationships through the shared parameter: explicit interaction refers to using the shared parameter $ \mathbf V $ to refactor entities and relationships while completing their interaction.

    The third layer is the candidate prediction layer, i.e., the output layer. It rests on the fact that the feature representation can capture the semantic similarity of triple data in the knowledge graph. By operating on the outputs of the first two layers of FRS, an embedding representation close to the predicted result is obtained, and a similarity calculation is then performed against the real candidate entities. The candidate prediction process is defined as:

    $ S(\mathbf{e}, \mathbf{r}) = f[f(R(\mathbf{e}, \mathbf{r}))\mathbf{C}^{\top} + b] $    (4.4)

    where $ \mathbf C $ is an $ s \times k $ candidate entity matrix, $ s $ denotes the number of candidate entities, $ k $ denotes the embedding size, $ f $ represents activation functions, and $ b $ is the candidate prediction bias. Since the candidates come from the entity set $ \mathbf E $ and the FRS model shares parameters, no additional variables are needed. This layer outputs the final prediction results as a scoring list.
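
    Putting the three layers together, the sketch below runs one FRS forward pass following Eqs (4.6)-(4.9); the weights, candidate matrix, and sizes are randomly initialized illustrative assumptions, not the trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def frs_scores(e, r, W, V, C, b):
    """One FRS forward pass for a query <h, r, ?>."""
    F_e = sigmoid(sigmoid(e @ W) @ W) + e    # feature processing layer, Eq (4.8)
    F_r = sigmoid(sigmoid(r @ W) @ W) + r    # feature processing layer, Eq (4.9)
    R_er = (F_e + F_r) @ V                   # refactoring layer, Eq (4.7)
    return sigmoid(np.tanh(R_er) @ C.T + b)  # candidate prediction layer, Eq (4.6)

k, s = 8, 100                                 # embedding size, candidate count
rng = np.random.default_rng(4)
e, r = rng.normal(size=k), rng.normal(size=k)
W, V = rng.normal(size=(k, k)), rng.normal(size=(k, k))
C, b = rng.normal(size=(s, k)), 0.1           # one row of C per candidate entity

scores = frs_scores(e, r, W, V, C, b)
print(np.argsort(-scores)[:10])               # indices of the top-10 candidates
```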

    In our algorithm, the candidate entity set does not increase the number of parameters, thanks to entity variable sharing; nevertheless, training on the full entity set every time would incur a huge amount of computation. A candidate sampling method is therefore used to reduce the size of the candidate entity set $ C $. We adopt the negative sampling rule of word2vec [19]: the candidate set used for training consists of the entities of all positive instances plus the entities of a subset of negative instances. We can use the binomial distribution $ B(1, P_y) $ to indicate whether the entity of a negative instance is selected, where $ P_y $ is the probability that the negative case is selected and $ 1 - P_y $ the probability that it is not. To learn the representations of entities and relationships, we need a loss function that maximizes the plausibility of triples. For a triple and a binary label vector, we obtain the candidate prediction result from positive and negative candidates. We assign all entities a binary label vector in which entities in $ \mathbf{E}_{-} $ receive score 0 and entities in $ \mathbf{E}_{+} $ receive score 1. To maximize the agreement between the binary label vector and the candidate prediction results, we define the loss function as:

    $ L(e, r, y) = -\sum_{i \in \{i \mid y_{i} = 1\}} \log(S(e, r)_{i}) - s\,\mathbb{E}_{j \sim P_{n}}\log(1 - S(e, r)_{j}) $    (4.5)

    In the binary label vector, $ y \in \mathbf C $ and $ y_{i} = 1 $ indicates a positive label; $ s $ denotes the number of negative samples drawn from $ \mathbb{E}_{j \sim P_{n}} $. With these settings, the ranking score of the $ i^{\mathrm{th}} $ candidate entity is:

    $ S(e, r)_{i} = \mathrm{sigmoid}[\tanh(R(e, r))\,\mathbf{C}[i,:]^{\top} + b] $    (4.6)
    $ R(e, r) = [F(\mathbf{e}) + F(\mathbf{r})]\mathbf{V} $    (4.7)
    $ F(\mathbf{e}) = \mathrm{sigmoid}[\mathrm{sigmoid}(\mathbf{e}\mathbf{W})\mathbf{W}] + \mathbf{e} $    (4.8)
    $ F(\mathbf{r}) = \mathrm{sigmoid}[\mathrm{sigmoid}(\mathbf{r}\mathbf{W})\mathbf{W}] + \mathbf{r} $    (4.9)

    We use the following two evaluation metrics as the basis for assessment: the average rank of all correct entities (Mean Rank) and the proportion of correct entities that appear within the top-k elements (Hits@k). These two metrics are called Raw when other correct target entities are not filtered out during evaluation; when they are filtered out, the metrics become Filtered. For example, suppose the input triple is $ < Italy, Contained\_by, ? > $, the target entity is $ Toscana $, and the entity prediction task returns the top-2 list $ Florence $, $ Toscana $. The Raw Mean Rank and Raw Hits@1 would be 2 and 0 respectively, while the Filtered Mean Rank and Filtered Hits@1 would both be 1, because the Filtered setting ignores the other candidate entities that are themselves correct.
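
    The example above can be reproduced with a short sketch of the Raw and Filtered protocols; the candidate list and KG contents are the toy values from the example:

```python
def rank_and_hits(ranked, target, known_correct, k=1, filtered=False):
    """Rank of `target` in the best-first list `ranked`, plus a Hits@k flag.
    Mean Rank averages these ranks over all test queries."""
    if filtered:  # drop other entities that are also correct answers in the KG
        ranked = [c for c in ranked if c == target or c not in known_correct]
    rank = ranked.index(target) + 1
    return rank, int(rank <= k)

ranked = ["Florence", "Toscana"]   # top-2 list for <Italy, Contained_by, ?>
known = {"Florence", "Toscana"}    # both tails exist as correct triples in the KG
print(rank_and_hits(ranked, "Toscana", known))                 # (2, 0): Raw
print(rank_and_hits(ranked, "Toscana", known, filtered=True))  # (1, 1): Filtered
```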

    For experiments with FRS, the remaining parameter settings of Adam are: $ \beta_{1} = 0.9 $, $ \beta_{2} = 0.999 $ and $ \epsilon = 10^{-8} $. All experiments in this paper run for 100 epochs, and all parameters are initialized from the uniform distribution $ U\left[-\frac{6}{\sqrt{k}}, \frac{6}{\sqrt{k}}\right] $. The hyperparameter settings are as follows: dropout probability $ p_{d} = 0.5 $, negative sampling probability $ p_{n} = 0.5 $, embedding size $ k = 200 $, and minibatch size $ b = 200 $. We manually set the learning rate schedule using the idea of learning rate decay during training, with an initial learning rate of 0.01.
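
    For reference, a sketch of the initialization described above; the variable names W and V are ours, and only the numerical settings come from the text:

```python
import numpy as np

# All weights drawn from U[-6/sqrt(k), 6/sqrt(k)] with embedding size k = 200.
k = 200
bound = 6.0 / np.sqrt(k)
rng = np.random.default_rng(7)
W = rng.uniform(-bound, bound, size=(k, k))  # shared feature-processing weights
V = rng.uniform(-bound, bound, size=(k, k))  # refactoring weights
# Remaining settings from the text: dropout 0.5, negative sampling p_n = 0.5,
# minibatch 200, 100 epochs, Adam(beta1=0.9, beta2=0.999, eps=1e-8),
# initial learning rate 0.01 with manual decay.
```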

    This subsection focuses on three questions. First, what are the main similarities and differences between FRS and other KGC models? Second, how effective is FRS compared with other KGC models under traditional experimental settings and the same benchmarks? Third, what are the contributions and limitations of the proposed FRS model? In the tables, MR and FMR refer to Mean Rank and Filtered Mean Rank respectively, and the capital letter F indicates Filtered results. The two tables contain different models because the experimental results are all taken from the original papers; an original paper may report results on only one dataset, either FB15K or WN18, and the tables strictly follow the results of the original papers. Table 3 shows the statistics of FB15K and WN18. Table 4 shows the evaluation results on FB15K. Table 5 shows the evaluation results on WN18.

    Table 3.  Statistics of the experimental datasets.
    Dataset Entity Relationship Training Valid Test
    FB15K 14951 1345 483142 50000 59071
    WN18 40943 18 141442 5000 5000

    Table 4.  Entity prediction results of different models on FB15K.
    Model MR FMR Hits@10(%) FHits@10(%)
    Unstructured [17] 1074 979 4.5 6.3
    SE [18] 273 162 39.8 28.8
    TransE [20] 243 125 34.9 47.1
    TransH [21] 212 87 45.7 64.4
    TransR [30] 198 77 48.2 68.7
    TEKE_H [31] 212 108 51.2 73.0
    KG2E [32] 174 59 48.9 74.0
    TransD [33] 194 91 53.4 77.3
    lppTransD [34] 195 78 53.0 78.7
    SSP [35] 163 82 57.2 79.0
    TranSparse [36] 187 82 53.5 79.5
    TransG [37] 203 98 52.8 79.8
    TranSparse-DT [38] 188 79 53.9 80.2
    PTransE-RNN [39] 242 92 50.6 82.2
    PTransE-ADD [39] 207 58 51.4 84.6
    ProjE_pointwise [16] 174 104 56.5 86.6
    FRS(our) 110.8 41.9 58.8 89.1

    Table 5.  Entity prediction results of different models on WN18.
    Model MR FMR Hits@10(%) FHits@10(%)
    Unstructured [17] 315 304 35.3 38.2
    SE [18] 1011 985 68.5 80.5
    TransE [20] 263 251 75.4 89.2
    TransH [21] 401 303 73.0 86.7
    TransR [30] 238 225 79.8 92.0
    KG2E [32] 342 331 80.2 92.8
    TEKE_H [31] 127 114 80.3 92.9
    TransD [33] 224 212 79.6 92.2
    SSP [35] 168 156 81.2 93.2
    TranSparse [36] 223 211 80.1 93.2
    TransG [37] 483 470 81.4 93.3
    lppTransD [34] 283 270 80.5 94.3
    TranSparse-DT [38] 234 221 81.4 94.3
    FRS(our) 112.9 103.8 85.4 97.2


    We discuss the performance in detail to give more insight into FRS. The results in the tables are sorted in ascending order of FHits@10. Except for FRS, all models in Table 4 and Table 5 are reported with their original published results. Since the literature does not always separate head entity prediction from tail entity prediction, the results reported for our model are likewise the better-selected set. All KGC models, including FRS, use low-dimensional embedding vectors to represent the entities and relationships of the knowledge graph. FRS differs from the other models in the tables in that it innovatively introduces feature engineering into knowledge graph completion and proposes a subtle feature processing method, the F network. As can be seen from Table 4 and Table 5, Hits@10 rises while Mean Rank falls. Since Mean Rank is always greater than or equal to 1 and Hits@10 always lies between 0.0 and 1.0, a lower Mean Rank and a higher Hits@10 indicate better entity prediction performance. The model's advantage on these metrics is even more pronounced on WN18, possibly because WN18 contains fewer relationship types among its fact triples, which affects the learning ability of the models. Although some models succeed only on part of the evaluation protocols, FRS achieves the best performance on all four evaluation protocols of FB15K and WN18. This supports the idea that sufficient interaction should be implemented in KGC models: FRS alleviates the problem of insufficient interaction in KGC tasks and achieves the best prediction results without introducing redundant parameters and variables, thanks to its use of shared parameters.

    Hits@K measures whether correct entities appear within the top-k elements; the higher the Hits@K, the better the entity prediction performance. To better demonstrate the performance of FRS in terms of Hits@K, Table 6 reports the experimental results against representative baseline methods on fine-grained evaluation indicators. As Table 6 shows, FRS consistently outperforms all baselines on all three indicators, demonstrating its effectiveness and superiority. On Hits@1, Hits@3, and Hits@10, FRS scores 3.8, 2.7 and 2.1 points higher, respectively, than the best baseline results.

    Table 6.  Experimental results of entity prediction in terms of Hits@{1, 3, 10} on FB15K.
    Model Hits@1 Hits@3 Hits@10
    TransR [30] 21.8 40.4 58.2
    RESCAL [22] 23.5 40.9 58.7
    TransE [20] 29.7 57.8 74.9
    HolE [29] 40.2 61.3 73.9
    ComplEx [40] 59.9 75.9 84.0
    ProjE [16] 72.1 81.0 86.6
    SENN [24] 65.9 79.2 87.0
    FRS(our) 75.9 83.7 89.1


    For our model, two pivotal hyperparameters influence the evaluation results: the embedding size $ k $ and the negative candidate sampling probability $ p_{n} $. In this section, we study the impact of each hyperparameter on model performance while holding the other fixed. We choose the KGE model ProjE [16] as the baseline for this analysis because, apart from the feature engineering it introduces, FRS combines entities and relationships with a specific matrix operation in the same way ProjE does. Since FB15K contains more categories of relationships, it supports a more objective analysis of the model and of the parameters' impact, so the experiments in this section use FB15K. Figure 3 shows the effect of embedding size on FB15K; Figure 4 shows the effect of the negative candidate sampling probability.

    Figure 4.  The effect of probability for negative candidate sampling. P refers to the ProjE model, and F refers to the FRS model.

    Figure 3(a) shows that both MR and FMR trend downward as the embedding size $ k $ increases, with FRS decreasing more than ProjE; Figure 3(b) shows that Hits@10 and FHits@10 both trend upward, with FRS rising more than ProjE. We therefore conclude that as the embedding size increases, the entity prediction performance of KGE models improves, and that Mean Rank is more sensitive to this factor than Hits@K. At the same embedding size, FRS predicts better than ProjE. Both models change sharply between embedding sizes 100, 200, and 300, and prediction performance stops improving significantly beyond an embedding size of 400. This shows that an appropriate embedding size makes the model more expressive; however, once the embedding size passes a certain threshold, the network learns unimportant features or even noise, which causes negative effects and requires more computing resources.

    Figure 4(a) shows that both MR and FMR rise as the probability $ p_{n} $ increases, and Figure 4(b) shows that Hits@10 and FHits@10 both decrease. As $ p_{n} $ grows, both FRS and ProjE are negatively affected, but FRS suffers less than ProjE; even under this negative influence, FRS still predicts better, indicating that the model is more robust. From the results in Figure 3 and Figure 4, we conclude that the embedding size and the negative sampling probability affect the entity prediction results in opposite directions, and that the embedding size influences the model more strongly than the negative sampling probability. This again shows that we should preserve as much semantic information as possible in the embedding space, that is, ensure sufficient interaction between entities and relationships. Throughout these further experiments, FRS always predicted better than ProjE, even when the influence of the parameters on the results was unfavorable.

    In this paper, a shared-parameter knowledge graph embedding model called FRS is proposed for entity prediction tasks. FRS innovatively introduces the feature engineering method into entity prediction, alleviating the problem of insufficient interaction in KGE models; in particular, the proposed F network realizes the alignment of entities and relationships in the same feature space. Experiments show that FRS, with its feature processing layer, clearly outperforms traditional KGE models, and its prediction performance remains better whether the hyperparameters' influence is favorable or unfavorable. FRS requires no pre-training, is a self-sufficient model over length-1 relationships, and obtains the best entity prediction results with a simple three-layer network. Although FRS uses the idea of shared parameters, training still involves a large number of parameters; future work will address how to alleviate insufficient interaction while reducing the number of training parameters. Moreover, we will consider the sequential relations between the components of triples, for example by applying RNNs or LSTMs.

    This work was supported in part by the National Natural Science Foundation of China under Grant 61373120. The work of this paper was supported by the Electronic Service Technology and Engineering Lab, Northwestern Polytechnical University, through a Ph.D. Scholarship.

    All authors declare no conflicts of interest in this paper.
