818 results for competence network model
Abstract:
For self-pollinating plants to reproduce, male and female organ development must be coordinated as flowers mature. The Arabidopsis transcription factors AUXIN RESPONSE FACTOR 6 (ARF6) and ARF8 regulate this complex process by promoting petal expansion, stamen filament elongation, anther dehiscence, and gynoecium maturation, thereby ensuring that pollen released from the anthers is deposited on the stigma of a receptive gynoecium. ARF6 and ARF8 induce jasmonate production, which in turn triggers expression of MYB21 and MYB24, encoding R2R3 MYB transcription factors that promote petal and stamen growth. To understand the dynamics of this flower maturation regulatory network, we have characterized morphological, chemical, and global gene expression phenotypes of arf, myb, and jasmonate pathway mutant flowers. We found that MYB21 and MYB24 promoted not only petal and stamen development but also gynoecium growth. As well as regulating reproductive competence, both the ARF and MYB factors promoted nectary development or function and volatile sesquiterpene production, which may attract insect pollinators and/or repel pathogens. Mutants lacking jasmonate synthesis or response had decreased MYB21 expression and stamen and petal growth at the stage when flowers normally open, but had increased MYB21 expression in petals of older flowers, resulting in renewed and persistent petal expansion at later stages. Both auxin response and jasmonate synthesis promoted positive feedbacks that may ensure rapid petal and stamen growth as flowers open. MYB21 also fed back negatively on expression of jasmonate biosynthesis pathway genes to decrease flower jasmonate level, which correlated with termination of growth after flowers have opened. These dynamic feedbacks may promote timely, coordinated, and transient growth of flower organs.
Abstract:
The work presented here develops a theoretical and practical framework for the evaluation and study of a generative model applied to discriminative tasks on sound signals without a harmonic component. The generative model is based on the construction of the so-called deep belief network, a type of generative neural network that can perform classification and regression tasks as well as reconstruction of its internal states. From the analysis carried out, we obtained classification results on par with the state of the art for classifiers of inharmonic sounds. Although it does not establish a clear superiority over other methods, the present work has allowed us to develop an analysis of the evaluated model with many possibilities for improvement in future work. Throughout the work we demonstrate its effectiveness in discriminative tasks, as well as its ability to reduce the dimensionality of the model's input data and the possibility of reconstructing its internal states to obtain network outputs similar to the descriptor inputs. The development centered on the deep belief network allowed us to build a unified environment for evaluating different learning methods, for constructing and adapting different sound descriptors, and for subsequently visualizing the model's internal states, which enabled a comparative and unified evaluation against other state-of-the-art classification methods. It also allowed us to develop an implementation in a high-level language, which provided greater insight for the understanding and analysis of the evaluated model, with a more solid argumentation. The results and analysis we report are significant and positive for the evaluated model, and given the scarce literature on the classification of inharmonic sounds such as percussive sounds, we believe this is an interesting and significant contribution to the field in which the work is framed.
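As a rough illustration of the kind of model evaluated above, the following sketch stacks restricted Boltzmann machines in front of a discriminative read-out; scikit-learn's BernoulliRBM and the random placeholder descriptors are assumptions made for this example, not part of the original work:

# Minimal sketch, assuming scikit-learn's BernoulliRBM as a stand-in for the RBM
# layers of a deep belief network; random features replace the spectral descriptors.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)
X = rng.rand(200, 40)          # placeholder percussive-sound descriptors in [0, 1]
y = rng.randint(0, 3, 200)     # placeholder class labels (e.g. drum types)

# Greedy layer-wise feature learning followed by a discriminative read-out,
# mimicking DBN pretraining plus a supervised classification stage.
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))

A full deep belief network would additionally fine-tune all layers jointly after the greedy pretraining; the pipeline above only mimics the layer-wise feature learning plus a supervised read-out.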
Abstract:
This paper deals with the problem of spatial data mapping. A new method based on wavelet interpolation and geostatistical prediction (kriging) is proposed. The method, wavelet analysis residual kriging (WARK), is developed in order to address the problems arising for highly variable data in the presence of spatial trends. In such cases stationary prediction models have very limited applicability. Wavelet analysis is used to model large-scale structures, and kriging of the remaining residuals focuses on small-scale peculiarities. WARK is thus able to model spatial patterns that feature multiscale structure. In the present work WARK is applied to rainfall data and the validation results are compared with the ones obtained from neural network residual kriging (NNRK). NNRK is also a residual-based method, which uses an artificial neural network to model large-scale non-linear trends. The comparison of the results demonstrates the high-quality performance of WARK in predicting hot spots and in reproducing the global statistical characteristics of the distribution and the spatial correlation structure.
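An illustrative sketch of the WARK idea, under stated assumptions: PyWavelets supplies the wavelet smoothing of the large-scale trend, a Gaussian process regressor stands in for ordinary kriging of the residuals (the two share the same predictor form), and the synthetic 1-D transect below replaces the actual rainfall data:

# Sketch of wavelet analysis residual kriging on a synthetic 1-D "rainfall" transect.
import numpy as np
import pywt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.RandomState(1)
x = np.linspace(0.0, 1.0, 256)
z = np.sin(2 * np.pi * 3 * x) + 0.3 * rng.randn(x.size)

# 1. Large-scale trend: keep only the coarse wavelet coefficients.
coeffs = pywt.wavedec(z, "db4", level=4)
coeffs_trend = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
trend = pywt.waverec(coeffs_trend, "db4")[: z.size]

# 2. Small-scale structure: "krige" (GP-regress) the residuals.
residuals = z - trend
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.05), alpha=0.05)
gp.fit(x.reshape(-1, 1), residuals)

# 3. Final prediction = wavelet trend + kriged residual.
z_hat = trend + gp.predict(x.reshape(-1, 1))
print("RMSE:", np.sqrt(np.mean((z - z_hat) ** 2)))

On gridded 2-D rainfall fields the same two-step decomposition applies, with a 2-D wavelet transform and a spatial variogram or kernel in place of the 1-D ingredients used here.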
Abstract:
"Sitting between your past and your future doesn't mean you are in the present." (Dakota Skye)
Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural, and mathematical sciences. The emergence of a higher-order organization or behavior, transcending what is expected from the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions among the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by these systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations observed in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts. In the first part, we study cellular automata, a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, which rely on simple rules and on information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions among cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one. The outcome is remarkable: the resulting topologies share properties of both regular and random networks, and display similarities to the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean network model. In some ways this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to those of gene regulatory networks. We have also studied actual biological organisms and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place on Boolean networks and proposed a more biologically plausible cascading update scheme. Finally, we tackled the Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of previous GRN models, yet with superior resistance to perturbations. We believe they are one step closer to the biological reality.
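To make the second part more concrete, here is a minimal Kauffman-style random Boolean network with synchronous updates; the network size, connectivity, and random truth tables are placeholders, and the cascading update scheme and biologically derived functions introduced in the thesis are not reproduced:

# Minimal Kauffman random Boolean network: N genes, each regulated by K random inputs.
import numpy as np

rng = np.random.RandomState(42)
N, K = 12, 2

inputs = np.array([rng.choice(N, K, replace=False) for _ in range(N)])
# One random Boolean function (truth table over 2**K input patterns) per gene.
tables = rng.randint(0, 2, size=(N, 2 ** K))

def step(state):
    """Synchronously update every gene from the states of its K regulators."""
    idx = np.array([int("".join(map(str, state[inputs[i]])), 2) for i in range(N)])
    return tables[np.arange(N), idx]

state = rng.randint(0, 2, N)
for t in range(20):
    state = step(state)
print("state after 20 steps:", state)

The modifications discussed above plug in exactly here: replacing the regular synchronous sweep, the random wiring, or the random truth tables with biologically grounded alternatives.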
Abstract:
Objective: To build a theoretical model that configures the social support network experience of people involved in home care. Method: A qualitative study using the Grounded Theory method. Simultaneous data collection and analysis allowed the interpretation of the meaning of the phenomenon "the social support network of people involved in home care". Results: The population's passive posture in building their own well-being was highlighted. The need for shared responsibility between the parties involved, population and State, is recognized. Conclusion: It is suggested that nurses be encouraged to expand home care in order to meet the demands of caregivers, and that new studies be carried out with different populations to validate or complement the proposed theoretical model.
Abstract:
Models incorporating more realistic representations of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program, called the CDLP, which has an exponential number of columns. When the segment consideration sets overlap, however, the CDLP is difficult to solve: column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper we propose a new approach to solving CDLP, called SDCP, based on segments and their consideration sets. SDCP is a relaxation of CDLP and hence forms a looser upper bound on the dynamic program, but it coincides with CDLP for the case of non-overlapping segments. If the number of elements in a segment's consideration set is not very large, SDCP can be applied to any discrete-choice model of consumer behavior. We tighten the SDCP bound (i) by simulation, in the randomized concave programming (RCP) method, and (ii) by adding cuts to a recent compact formulation of the problem for a latent multinomial-choice model of demand (SBLP+). This latter approach turns out to be very effective, essentially attaining the CDLP value and giving excellent revenue performance in simulations, even for overlapping segments. By formulating the problem as a separation problem, we give insight into why CDLP is easy for the MNL model with non-overlapping consideration sets and why generalizations of MNL pose difficulties. We perform numerical simulations to determine the revenue performance of all the methods on reference data sets from the literature.
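For reference, the CDLP discussed above is usually written along the following lines; the notation here (T remaining periods, offer sets S, purchase probabilities P_j(S), fares f_j, leg capacities c_i, and incidence coefficients a_{ij}) is generic rather than the paper's own:

\[
\max_{t(S)\ge 0}\ \sum_{S} t(S)\sum_{j} f_j\,P_j(S)
\quad\text{s.t.}\quad
\sum_{S} t(S)\sum_{j} a_{ij}\,P_j(S) \le c_i \ \ \forall i,
\qquad
\sum_{S} t(S) \le T.
\]

Each variable t(S) is the number of periods during which offer set S is made available; the exponentially many sets S are what make column generation necessary and what motivates the segment-based SDCP relaxation.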
Abstract:
The choice network revenue management model incorporates customer purchase behavior as a function of the offered products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The optimization problem is a stochastic dynamic program and is intractable. A certainty-equivalence relaxation of the dynamic program, called the choice deterministic linear program (CDLP), is usually used to generate dynamic controls. Recently, a compact linear programming formulation of this linear program was given for the multi-segment multinomial-logit (MNL) model of customer choice with non-overlapping consideration sets. Our objective is to obtain a tighter bound than this formulation while retaining the appealing properties of a compact linear programming representation. To this end, it is natural to consider the affine relaxation of the dynamic program. We first show that the affine relaxation is NP-complete even for a single-segment MNL model. Nevertheless, by analyzing the affine relaxation we derive a new compact linear program that approximates the dynamic programming value function better than CDLP, provably lying between the CDLP value and the affine relaxation, and often coming close to the latter in our numerical experiments. When the segment consideration sets overlap, we show that some strong equalities called product cuts, developed for the CDLP, remain valid for our new formulation. Finally, we perform extensive numerical comparisons of the various bounds to evaluate their performance.
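As a reminder of what the affine relaxation referred to above does (generic notation, assumed here rather than taken from the paper), the dynamic programming value function at time t is approximated by a function affine in the remaining leg capacities x_i:

\[
V_t(x)\ \approx\ \theta_t + \sum_{i} v_{t,i}\,x_i .
\]

Requiring the Bellman inequalities of the dynamic program to hold for all states and times under this approximation yields a linear program; its optimal value upper-bounds the exact value function and, as stated above, is at least as tight as the CDLP bound.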
Abstract:
This paper analyzes the flow of intermediate inputs across sectors by adopting a network perspective on sectoral interactions. I apply these tools to show how fluctuationsin aggregate economic activity can be obtained from independent shocks to individualsectors. First, I characterize the network structure of input trade in the U.S. On thedemand side, a typical sector relies on a small number of key inputs and sectors arehomogeneous in this respect. However, in their role as input-suppliers sectors do differ:many specialized input suppliers coexist alongside general purpose sectors functioningas hubs to the economy. I then develop a model of intersectoral linkages that can reproduce these connectivity features. In a standard multisector setup, I use this modelto provide analytical expressions linking aggregate volatility to the network structureof input trade. I show that the presence of sectoral hubs - by coupling productiondecisions across sectors - leads to fluctuations in aggregates.
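One standard way such expressions are written in this literature (the notation here is generic and may differ from the paper's own) is through an influence vector derived from the input-output matrix W, whose entry w_{ij} is the share of sector j's output in sector i's intermediate input purchases. With labor share \alpha and independent sectoral productivity shocks \varepsilon, aggregate (log) output is approximately

\[
y = v'\varepsilon, \qquad v = \frac{\alpha}{n}\bigl(I - (1-\alpha)\,W'\bigr)^{-1}\mathbf{1},
\]

so aggregate volatility scales with \lVert v\rVert_2. Hub-like general-purpose suppliers receive large entries of v, which keeps \lVert v\rVert_2 from vanishing as the number of sectors n grows and lets idiosyncratic shocks survive aggregation.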
Abstract:
In this paper a p-median-like model is formulated to address the issue of locating new facilities under uncertainty. Several possible future scenarios with respect to demand and/or the travel time/distance parameters are presented. The planner wants a positioning strategy that will do "as well as possible" over the future scenarios. This paper presents a discrete location model formulation to address this p-median problem under uncertainty. The model is applied to the location of fire stations in Barcelona.
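A scenario-based p-median formulation of the kind described above can be sketched as follows (generic notation, not necessarily the paper's exact model); y_j = 1 if a facility (e.g. a fire station) is opened at candidate site j, x_{ij}^s = 1 if demand node i is assigned to site j under scenario s, p_s is the scenario weight, w_i^s the demand, and d_{ij}^s the travel time or distance:

\[
\min\ \sum_{s} p_s \sum_{i}\sum_{j} w_i^{s}\, d_{ij}^{s}\, x_{ij}^{s}
\quad\text{s.t.}\quad
\sum_{j} y_j = p,\qquad
\sum_{j} x_{ij}^{s} = 1\ \ \forall i,s,\qquad
x_{ij}^{s} \le y_j,\qquad
x_{ij}^{s},\,y_j \in \{0,1\}.
\]

The single first-stage location decision y must perform well in expectation (or, in robust variants, in the worst case) across all scenarios.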
Abstract:
We argue the importance both of developing simple sufficient conditions for the stability of general multiclass queueing networks and of assessing such conditions under a range of assumptions on the weight of the traffic flowing between service stations. To achieve the former, we review a peak-rate stability condition and extend its range of application; for the latter, we introduce a generalisation of the Lu-Kumar network on which the stability condition may be tested for a range of traffic configurations. The peak-rate condition is close to exact when the between-station traffic is light, but degrades as this traffic increases.
Abstract:
The Network Revenue Management problem can be formulated as a stochastic dynamic programming problem (DP), whose exact "optimal" solution V* is computationally intractable. Consequently, a number of heuristics have been proposed in the literature, the most popular of which are the deterministic linear programming (DLP) model and a simulation-based method, the randomized linear programming (RLP) model. Both methods give upper bounds on the optimal solution value (the DLP and PHLP bounds, respectively). These bounds are used to provide control values that can be applied in practice to make accept/deny decisions for booking requests. Recently, Adelman [1] and Topaloglu [18] have proposed alternative upper bounds, the affine relaxation (AR) bound and the Lagrangian relaxation (LR) bound respectively, and showed that their bounds are tighter than the DLP bound. Tight bounds are of great interest, as it appears from empirical studies and practical experience that models that give tighter bounds also lead to better controls (better in the sense that they lead to more revenue). In this paper we give tightened versions of three bounds, calling them sAR (strong Affine Relaxation), sLR (strong Lagrangian Relaxation) and sPHLP (strong Perfect Hindsight LP), and show relations between them. Specifically, we show that the sPHLP bound is tighter than the sLR bound, and the sAR bound is tighter than the LR bound. The techniques for deriving the sLR and sPHLP bounds can potentially be applied to other instances of weakly coupled dynamic programming.
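To make the DLP mentioned above concrete, here is a toy instance solved with SciPy; the fares, leg capacities, expected demands, and product-to-leg incidence matrix are invented purely for illustration and are not taken from the paper:

# Toy deterministic linear program (DLP) for network revenue management.
import numpy as np
from scipy.optimize import linprog

fares = np.array([400.0, 250.0, 300.0])        # products: connecting, local leg 1, local leg 2
A = np.array([[1, 1, 0],                       # leg 1 used by products 1 and 2
              [1, 0, 1]])                      # leg 2 used by products 1 and 3
capacity = np.array([100.0, 120.0])
expected_demand = np.array([60.0, 80.0, 70.0])

# linprog minimizes, so negate the fares to maximize expected revenue.
res = linprog(c=-fares, A_ub=A, b_ub=capacity,
              bounds=list(zip(np.zeros(3), expected_demand)), method="highs")
print("DLP upper bound on revenue:", -res.fun)
print("primal allocations:", res.x)

The optimal objective is an upper bound on the DP value, and the primal allocations (or the dual prices of the capacity constraints) are what is typically turned into booking-limit or bid-price controls.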
Abstract:
Hydrological models developed for extreme precipitation of the PMP type are difficult to calibrate because of the scarcity of data available for such events. This article presents the calibration process and results for a fine-scale distributed hydrological model developed for the estimation of probable maximum floods in the case of a PMP. The calibration is done on two Swiss catchments for two summer storm events. The work concentrates on estimating the model parameters, which fall into two groups: the first is needed for the computation of flow speeds, while the second determines the initial and final infiltration capacities for each terrain type. The results, validated with the Nash-Sutcliffe criterion, show good agreement between the simulated and observed flows. We also apply the model to two Romanian catchments, showing the river network and the estimated flows.
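For reference, the Nash-Sutcliffe efficiency used for this kind of validation (assuming this is the "Nash" criterion referred to above) compares simulated and observed discharges as

\[
\mathrm{NSE} = 1 - \frac{\sum_{t}\bigl(Q_{\mathrm{obs},t}-Q_{\mathrm{sim},t}\bigr)^{2}}{\sum_{t}\bigl(Q_{\mathrm{obs},t}-\overline{Q}_{\mathrm{obs}}\bigr)^{2}},
\]

with values close to 1 indicating that the simulated hydrograph closely reproduces the observed one.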
Abstract:
The objective of this paper is to compare the performance of two predictive radiological models, logistic regression (LR) and neural network (NN), with five different resampling methods. One hundred and sixty-seven patients with proven calvarial lesions as the only known disease were enrolled. Clinical and CT data were used for the LR and NN models. Both models were developed with cross-validation, leave-one-out and three different bootstrap algorithms. The final results of each model were compared using the error rate and the area under the receiver operating characteristic curve (Az). The neural network obtained a statistically higher Az than LR with cross-validation. The remaining resampling validation methods did not reveal statistically significant differences between the LR and NN rules. The neural network classifier performs better than the one based on logistic regression. This advantage is well detected by three-fold cross-validation, but remains unnoticed when leave-one-out or bootstrap algorithms are used.
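A minimal sketch of this kind of comparison, using scikit-learn stand-ins rather than the authors' actual code and synthetic data in place of the clinical and CT features:

# Logistic regression vs. a small neural network, scored by cross-validated ROC AUC (Az).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Placeholder for the clinical + CT features of the 167 patients.
X, y = make_classification(n_samples=167, n_features=12, random_state=0)

lr = LogisticRegression(max_iter=1000)
nn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)

for name, clf in [("logistic regression", lr), ("neural network", nn)]:
    az = cross_val_score(clf, X, y, cv=3, scoring="roc_auc")
    print(f"{name}: mean Az = {az.mean():.3f}")

The cross-validated Az values play the role of the areas under the ROC curves compared in the study; the other resampling schemes (leave-one-out, bootstrap) would replace the simple three-fold split used here.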
Abstract:
In a previous paper a novel Generalized Multiobjective Multitree model (GMM-model) was proposed. This model considers, for the first time, multitree-multicast load balancing with splitting in a multiobjective context, whose mathematical solution is a whole Pareto optimal set that can include more solutions than it has been possible to find in the publications surveyed. To solve the GMM-model, this paper proposes a multi-objective evolutionary algorithm (MOEA) inspired by the Strength Pareto Evolutionary Algorithm (SPEA). Experimental results considering up to 11 different objectives are presented for the well-known NSF network, with two simultaneous data flows.
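Since the output of such a MOEA is a whole Pareto optimal set, a minimal non-dominated filter illustrates the underlying notion; the objective vectors below are invented and assumed to be minimized:

# Return the indices of objective vectors not dominated by any other vector.
import numpy as np

def pareto_front(points):
    pts = np.asarray(points)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is no worse in every objective
        # and strictly better in at least one.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

objectives = np.array([[3.0, 5.0], [4.0, 4.0], [5.0, 3.0], [5.0, 5.0]])
print("non-dominated solutions:", pareto_front(objectives))

SPEA-style algorithms maintain exactly such an external archive of non-dominated solutions and use it both for elitism and for fitness assignment.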
Abstract:
Purpose/Objective(s): Mammary adenoid cystic carcinoma (ACC) is a rare breast cancer variant, accounting for less than 0.1% of all invasive breast malignancies. Typically, it presents as a small breast lump with a low propensity to metastasize to regional lymph nodes or distant sites. The aim of this retrospective multicenter Rare Cancer Network study is to assess prognostic factors and patterns of failure in ACC, as well as the role of radiation therapy (RT) in this rare disease. Materials/Methods: Between January 1980 and December 2007, 61 women with breast ACC were included in this study. Median age was 59 years (range, 28-94 years). The majority of the patients had good performance status (49 patients with WHO 0, 12 patients with WHO 1), and 70% of the patients (n = 42) were premenopausal. Surgery consisted of tumorectomy in 35 patients, mastectomy in 20, or quadrantectomy in 6. Median tumor size was 20 mm (range, 6-170 mm). Surgical margins were clear in 50 (82%) patients. Axillary dissection (n = 41) or sentinel node assessment (n = 10) was performed in the majority of the patients. There were 53 (87%) pN0 and 8 (13%) pNx patients. Estrogen receptor (ER) and progesterone receptor (PR) status was negative in 43 (71%) and 42 (69%) patients, respectively. In 16 patients (26%), the receptor status was unknown. Adjuvant chemotherapy or hormone therapy was administered in 8 (13%) and 7 (12%) patients, respectively. Postoperative RT with a median total dose of 50 Gy (1.8-2.0 Gy/fraction; range, 44-70 Gy) was given to 40 patients. Results: With a median follow-up of 79 months (range, 6-285 months), the 5-year overall and disease-free survival (DFS) rates were 94% (95% confidence interval [CI]: 88-100%) and 82% (95% CI: 71-93%), respectively. The 5-year locoregional control rate was 95% (95% CI: 89-100%). There were only 4 patients with local relapse, all of whom were salvaged successfully, and 4 other patients developed distant metastases. According to the Common Terminology Criteria for Adverse Events v3.0, late toxicity consisted of grade 2-3 cutaneous fibrosis in 4 (10%) patients, grade 1-2 edema in 2 (5%), and grade 3 lung fibrosis in 2 (5%). In univariate analyses, the outcome was influenced neither by the type of surgery nor by the use of postoperative RT. However, positive receptor status had a negative influence on the outcome. Multivariate analysis (Cox model) revealed that negative ER (p = 0.006) or PR (p = 0.04) status was associated with improved DFS. Conclusions: ACC of the breast is a relatively indolent disease with excellent local control and survival. The prognosis of patients with ACC is much better than that of patients with other breast cancers, especially for those who are ER and PR negative. The role of postoperative RT is not clear. More aggressive treatments may be warranted for patients with positive receptor status.