932 results for "Domain specific modeling"
Abstract:
In this paper, Prony's method is applied to time-domain waveform data modelling in the presence of noise. Three problems encountered in this work are studied: (1) determination of the order of the waveform; (2) determination of the number of multiple roots; (3) determination of the residues. Methods for solving these problems are given and simulated on a computer. Finally, an output pulse of a model PG-10N signal generator, and the distorted waveform obtained by transmitting that pulse through a piece of coaxial cable, are modelled, with satisfactory results, confirming the effectiveness of Prony's method for waveform data modelling in the presence of noise.
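The three steps studied above (fitting the linear-prediction model, finding the roots, and determining the residues) can be sketched in a minimal noise-free implementation of Prony's method. This is a generic textbook version, not the paper's code; the function name and the two-mode example are illustrative.

```python
import numpy as np

def prony(x, p):
    """Fit x[n] ~ sum_k c_k * z_k**n with p exponential modes (Prony's method).

    Returns (z, c): the complex poles z_k and their residues c_k.
    """
    N = len(x)
    # Step 1: linear prediction -- solve for coefficients a_k minimizing
    # x[n] + a_1*x[n-1] + ... + a_p*x[n-p] over n = p..N-1 (least squares).
    A = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, -x[p:N], rcond=None)
    # Step 2: the poles are the roots of the characteristic polynomial.
    z = np.roots(np.concatenate(([1.0], a)))
    # Step 3: residues from the Vandermonde system x[n] = sum_k c_k * z_k**n.
    V = np.vander(z.astype(complex), N, increasing=True).T  # shape (N, p)
    c, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return z, c

# Illustrative two-mode signal: x[n] = 2*(0.9)**n + (0.5)**n
n = np.arange(20.0)
z, c = prony(2 * 0.9**n + 0.5**n, p=2)
```

With noisy data, the order-determination problem the paper addresses amounts to choosing `p` (for example, by inspecting the singular values of `A`), and the two least-squares steps absorb part of the noise.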
Abstract:
Coq10p is a protein required for coenzyme Q function, but its specific role is still unknown. It is a member of the START domain superfamily that contains a hydrophobic tunnel implicated in the binding of lipophilic molecules. We used site-directed mutagenesis, statistical coupling analysis and molecular modeling to probe structural determinants in the Coq10p putative tunnel. Four point mutations were generated (coq10-K50E, coq10-L96S, coq10-E105K and coq10-K162D) and their biochemical properties analysed, as well as structural consequences. Our results show that all mutations impaired Coq10p function and together with molecular modeling indicate an important role for the Coq10p putative tunnel. (C) 2010 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
Abstract:
A per capita model for the growth, development, and reproduction of the coccinellid predator Rhizobius lophanthae (Blaisd.) feeding on the oleander scale (Aspidiotus nerii Bouche (Homoptera: Diaspididae)) was developed. A thermal threshold for development of 9.4 degrees C was found. Under conditions of unlimited food, the relationship of mass at time t+1 to that at time t (in days) suggests an 8.7 percent growth rate per mg of larva per day at 25 degrees C. An adult female beetle produces approximately 20 eggs per day while consuming an average of 8.5 scales per day. This is approximately 2.16 eggs per scale consumed above the maintenance level of 1.88 scales per day. More precisely, this compensation point is 0.12 mg of prey/mg of predator/day at 25 degrees C, and the egestion rate is 1 - beta = 0.63.
Abstract:
The accurate determination of thermophysical properties of milk is very important for the design, simulation, optimization, and control of food processes such as evaporation, heat exchange, spray drying, and so forth. Generally, polynomial methods based on empirical correlations to experimental data are used to predict these properties. Artificial neural networks are better suited for processing noisy and extensive data. This article proposes the application of neural networks for the prediction of specific heat, thermal conductivity, and density of milk with temperature ranging from 2.0 to 71.0 °C, water content from 72.0 to 92.0% (w/w), and fat content from 1.350 to 7.822% (w/w). Artificial neural networks presented a better prediction capability for specific heat, thermal conductivity, and density of milk than polynomial modeling, and offer a reasonable alternative to empirical modeling of the thermophysical properties of foods.
Abstract:
This paper describes a computational model based on lumped elements for the mutual coupling between phases in three-phase transmission lines without the explicit use of modal transformation matrices. The self and mutual parameters and the coupling between phases are modeled using modal transformation techniques. The modal representation is developed from the intrinsic consideration of the modal transformation matrix and the resulting system of time-domain differential equations is described as state equations. Thus, a detailed profile of the currents and the voltages through the line can be easily calculated using numerical or analytical integration methods. However, the original contribution of the article is the proposal of a time-domain model without the successive phase/mode transformations and a practical implementation based on conventional electrical circuits, without the use of electromagnetic theory to model the coupling between phases. © 2011 IEEE.
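Since the abstract casts the line model as time-domain state equations, any standard step method applies. A minimal sketch of one trapezoidal-rule step for a generic system dx/dt = A x + B u follows; the matrices are placeholders for the lumped line parameters, not the paper's actual values.

```python
import numpy as np

def trapezoidal_step(A, B, x, u0, u1, dt):
    """Advance dx/dt = A x + B u by one step dt using the trapezoidal rule:
    (I - dt/2 * A) x_{n+1} = (I + dt/2 * A) x_n + dt/2 * B (u_n + u_{n+1}).
    """
    I = np.eye(A.shape[0])
    rhs = (I + 0.5 * dt * A) @ x + 0.5 * dt * B @ (u0 + u1)
    return np.linalg.solve(I - 0.5 * dt * A, rhs)

# Illustrative scalar check: dx/dt = -x decays toward exp(-t).
A = np.array([[-1.0]])
B = np.array([[0.0]])
x = np.array([1.0])
u = np.array([0.0])
for _ in range(100):
    x = trapezoidal_step(A, B, x, u, u, dt=0.01)
```

The trapezoidal rule is the implicit, A-stable scheme commonly used by EMTP-type tools, which is why it is a natural fit for a circuit-based line model.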
Abstract:
A method is presented for estimating age-specific mortality based on minimal information: a model life table and an estimate of longevity. This approach uses expected patterns of mammalian survivorship to define a general model of age-specific mortality rates. One such model life table is based on data for northern fur seals (Callorhinus ursinus) using Siler’s (1979) 5-parameter competing risk model. Alternative model life tables are based on historical data for human females and on a published model for Old World monkeys. Survival rates for a marine mammal species are then calculated by scaling these models by the longevity of that species. By using a realistic model (instead of assuming constant mortality), one can see more easily the real biological limits to population growth. The mortality estimation procedure is illustrated with examples of spotted dolphins (Stenella attenuata) and harbor porpoise (Phocoena phocoena).
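Siler's (1979) competing-risk model referenced above combines juvenile, constant, and senescent hazard terms. A minimal sketch, assuming the commonly published form of the hazard; the parameter names and example values are illustrative, not the paper's fitted fur-seal values.

```python
import math

def siler_survival(x, a1, b1, a2, a3, b3):
    """Survivorship l(x) under Siler's hazard
        mu(x) = a1*exp(-b1*x) + a2 + a3*exp(b3*x),
    i.e. l(x) = exp(-integral_0^x mu(t) dt), with juvenile (a1, b1),
    constant (a2), and senescent (a3, b3) risk components.
    """
    H = (a1 / b1) * (1.0 - math.exp(-b1 * x))  # juvenile component
    H += a2 * x                                # constant component
    H += (a3 / b3) * (math.exp(b3 * x) - 1.0)  # senescent component
    return math.exp(-H)

def scaled_survival(x, longevity, model_longevity, params):
    """The abstract's scaling step: estimate survival for a species by
    evaluating the model life table at the proportionally equivalent age."""
    return siler_survival(x * model_longevity / longevity, *params)
```

Rescaling age by the ratio of longevities is what lets a single model life table serve species, such as spotted dolphins or harbor porpoise, for which only longevity is known.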
Abstract:
Prediction of radiated fields from transmission lines has not previously been studied from a panoptical power system perspective. The application of BPL technologies to overhead transmission lines would benefit greatly from an ability to simulate real power system environments, not limited to the transmission lines themselves. Presently, circuit-based transmission line models used by EMTP-type programs utilize Carson's formula for a waveguide parallel to an interface. This formula is not valid for calculations at high frequencies when effects of earth-return currents are considered. This thesis explains the challenges of developing such improved models, explores an approach to combining circuit-based and electromagnetics modeling to predict radiated fields from transmission lines, exposes inadequacies of simulation tools, and suggests methods of extending the validity of transmission line models into very high frequency ranges. Electromagnetics programs are commonly used to study radiated fields from transmission lines. However, an approach is proposed here which is also able to incorporate the components of a power system through the combined use of EMTP-type models. Carson's formulas address the series impedance of electrical conductors above and parallel to the earth. These equations have been analyzed to show their inherent assumptions and their implications. Additionally, their lack of validity at higher frequencies has been demonstrated, showing the need to replace Carson's formulas for these types of studies. This body of work leads to several conclusions about the relatively new study of BPL. Foremost, there is a gap in modeling capabilities which has been bridged through integration of circuit-based and electromagnetics modeling, allowing more realistic prediction of BPL performance and radiated fields. The proposed approach is limited in its scope of validity due to the formulas used by EMTP-type software.
To extend the range of validity, a new set of equations must be identified and implemented in the approach. Several potential methods of implementation have been explored. Though an appropriate set of equations has not yet been identified, further research in this area will benefit from a clear depiction of the next important steps and how they can be accomplished.
Abstract:
Wood formation is an economically and environmentally important process and has played a significant role in the evolution of terrestrial plants. Despite its significance, the molecular underpinnings of the process are still poorly understood. We have previously shown that four Lateral Boundary Domain (LBD) transcription factors have important roles in the regulation of wood formation with two (LBD1 and LBD4) involved in secondary phloem and ray cell development and two (LBD15 and LBD18) in secondary xylem formation. Here, we used comparative phylogenetic analyses to test potential roles of the four LBD genes in the evolution of woodiness. We studied the copy number and variation in DNA and amino acid sequences of the four LBDs in a wide range of woody and herbaceous plant taxa with fully sequenced and annotated genomes. LBD1 showed the highest gene copy number across the studied species, and LBD1 gene copy number was strongly and significantly correlated with the level of ray seriation. The lianas, cucumber and grape, with multiseriate ray cells showed the highest gene copy number (12 and 11, respectively). Because lianas’ growth habit requires significant twisting and bending, the less lignified ray parenchyma cells likely facilitate stem flexibility and maintenance of xylem conductivity. We further demonstrate conservation of amino acids in the LBD18 protein sequences that are specific to woody taxa. Neutrality tests showed evidence for strong purifying selection on these gene regions across various orders, indicating adaptive convergent evolution of LBD18. Structural modeling demonstrates that the conserved amino acids have a significant impact on the tertiary protein structure and thus are likely of significant functional importance.
Abstract:
By means of fixed-links modeling, the present study identified different processes of visual short-term memory (VSTM) functioning and investigated how these processes are related to intelligence. We conducted an experiment where the participants were presented with a color change detection task. Task complexity was manipulated through varying the number of presented stimuli (set size). We collected hit rate and reaction time (RT) as indicators for the amount of information retained in VSTM and speed of VSTM scanning, respectively. Due to the impurity of these measures, however, the variability in hit rate and RT was assumed to consist not only of genuine variance due to individual differences in VSTM retention and VSTM scanning but also of other, non-experimental portions of variance. Therefore, we identified two qualitatively different types of components for both hit rate and RT: (1) non-experimental components representing processes that remained constant irrespective of set size and (2) experimental components reflecting processes that increased as a function of set size. For RT, intelligence was negatively associated with the non-experimental components, but was unrelated to the experimental components assumed to represent variability in VSTM scanning speed. This finding indicates that individual differences in basic processing speed, rather than in speed of VSTM scanning, differentiates between high- and low-intelligent individuals. For hit rate, the experimental component constituting individual differences in VSTM retention was positively related to intelligence. The non-experimental components of hit rate, representing variability in basal processes, however, were not associated with intelligence. By decomposing VSTM functioning into non-experimental and experimental components, significant associations with intelligence were revealed that otherwise might have been obscured.
Abstract:
Calmodulin (CaM) is a ubiquitous Ca(2+) buffer and second messenger that affects cellular function as diverse as cardiac excitability, synaptic plasticity, and gene transcription. In CA1 pyramidal neurons, CaM regulates two opposing Ca(2+)-dependent processes that underlie memory formation: long-term potentiation (LTP) and long-term depression (LTD). Induction of LTP and LTD require activation of Ca(2+)-CaM-dependent enzymes: Ca(2+)/CaM-dependent kinase II (CaMKII) and calcineurin, respectively. Yet, it remains unclear as to how Ca(2+) and CaM produce these two opposing effects, LTP and LTD. CaM binds 4 Ca(2+) ions: two in its N-terminal lobe and two in its C-terminal lobe. Experimental studies have shown that the N- and C-terminal lobes of CaM have different binding kinetics toward Ca(2+) and its downstream targets. This may suggest that each lobe of CaM differentially responds to Ca(2+) signal patterns. Here, we use a novel event-driven particle-based Monte Carlo simulation and statistical point pattern analysis to explore the spatial and temporal dynamics of lobe-specific Ca(2+)-CaM interaction at the single molecule level. We show that the N-lobe of CaM, but not the C-lobe, exhibits a nano-scale domain of activation that is highly sensitive to the location of Ca(2+) channels, and to the microscopic injection rate of Ca(2+) ions. We also demonstrate that Ca(2+) saturation takes place via two different pathways depending on the Ca(2+) injection rate, one dominated by the N-terminal lobe, and the other one by the C-terminal lobe. Taken together, these results suggest that the two lobes of CaM function as distinct Ca(2+) sensors that can differentially transduce Ca(2+) influx to downstream targets. We discuss a possible role of the N-terminal lobe-specific Ca(2+)-CaM nano-domain in CaMKII activation required for the induction of synaptic plasticity.
Abstract:
BACKGROUND Sodium channel NaV1.5 underlies cardiac excitability and conduction. The last 3 residues of NaV1.5 (Ser-Ile-Val) constitute a PDZ domain-binding motif that interacts with PDZ proteins such as syntrophins and SAP97 at different locations within the cardiomyocyte, thus defining distinct pools of NaV1.5 multiprotein complexes. Here, we explored the in vivo and clinical impact of this motif through characterization of mutant mice and genetic screening of patients. METHODS AND RESULTS To investigate in vivo the regulatory role of this motif, we generated knock-in mice lacking the SIV domain (ΔSIV). ΔSIV mice displayed reduced NaV1.5 expression and sodium current (INa), specifically at the lateral myocyte membrane, whereas NaV1.5 expression and INa at the intercalated disks were unaffected. Optical mapping of ΔSIV hearts revealed that ventricular conduction velocity was preferentially decreased in the transversal direction to myocardial fiber orientation, leading to increased anisotropy of ventricular conduction. Internalization of wild-type and ΔSIV channels was unchanged in HEK293 cells. However, the proteasome inhibitor MG132 rescued ΔSIV INa, suggesting that the SIV motif is important for regulation of NaV1.5 degradation. A missense mutation within the SIV motif (p.V2016M) was identified in a patient with Brugada syndrome. The mutation decreased NaV1.5 cell surface expression and INa when expressed in HEK293 cells. CONCLUSIONS Our results demonstrate the in vivo significance of the PDZ domain-binding motif in the correct expression of NaV1.5 at the lateral cardiomyocyte membrane and underline the functional role of lateral NaV1.5 in ventricular conduction. Furthermore, we reveal a clinical relevance of the SIV motif in cardiac disease.
Abstract:
In arid countries worldwide, social conflicts between irrigation-based human development and the conservation of aquatic ecosystems are widespread and attract much public debate. This research focuses on the analysis of water and agricultural policies aimed at conserving groundwater resources and maintaining rural livelihoods in a basin in Spain's central arid region. Intensive groundwater mining for irrigation has caused overexploitation of the basin's large aquifer and the degradation of reputed wetlands, and has given rise to notable social conflicts over the years. With the aim of tackling the multifaceted socio-ecological interactions of complex water systems, the methodology used in this study consists of a novel integration into a common platform of an economic optimization model and the hydrology model WEAP (Water Evaluation And Planning system). This robust tool is used to analyze the spatial and temporal effects of different water and agricultural policies under different climate scenarios. It permits the prediction of climate and policy outcomes across farm types (water stress impacts and adaptation), at the basin level (aquifer recovery), and along the policies' implementation horizon (short and long run). Results show that the region's current quota-based water policies may contribute to reducing water consumption on the farms but will not be able to recover the aquifer and will inflict income losses on the rural communities. This situation would worsen in case of drought. Economies of scale and technology are evident, as larger farms with cropping diversification and those equipped with modern irrigation will better adapt to water stress conditions. However, the long-term sustainability of the aquifer and the maintenance of rural livelihoods will be attained only if additional policy measures are put in place, such as the control of illegal abstractions and the establishment of a water bank.
Within the policy domain, the research contributes to the new sustainable development strategy of the EU by concluding that, in water-scarce regions, effective integration of water and agricultural policies is essential for achieving the water protection objectives of the EU policies. Therefore, the design and enforcement of well-balanced, region-specific policies is a major task faced by policy makers for achieving successful water management that will ensure nature protection and human development at tolerable social costs. From a methodological perspective, this research initiative contributes to better addressing hydrological questions as well as economic and social issues in complex water and human systems. Its integrated vision provides a valuable illustration to inform water policy and management decisions within contexts of water-related conflicts worldwide.
Abstract:
Energy management has always been recognized as a challenge in mobile systems, especially in modern OS-based mobile systems where multitasking is widely supported. Nowadays, it is common for a mobile system user to run multiple applications simultaneously while having a target battery lifetime in mind for a specific application. Traditional OS-level power management (PM) policies make their best effort to save energy under performance constraints, but fail to guarantee a target lifetime, leaving the painful trade-off between the total performance of applications and the target lifetime to the user. This thesis provides a new way to deal with the problem. It is advocated that a strong energy-aware PM scheme should first guarantee a user-specified battery lifetime to a target application by restricting the average power of less important applications, and in addition maximize the total performance of applications without harming the lifetime guarantee. As support, energy, instead of CPU time or transmission bandwidth, should be globally managed by the OS as the first-class resource. As the first-stage work of a complete PM scheme, this thesis presents energy-based fair queuing scheduling, a novel class of energy-aware scheduling algorithms which, in combination with a mechanism restricting the battery discharge rate, systematically manage energy as the first-class resource with the objective of guaranteeing a user-specified battery lifetime for a target application in OS-based mobile systems. Energy-based fair queuing applies traditional fair queuing to the energy management domain. It assigns a power share to each task, and manages energy by serving energy to tasks in proportion to their assigned power shares. The proportional energy use establishes proportional shares of system power among tasks, which guarantees a minimum power for each task and thus avoids energy starvation of any task.
Energy-based fair queuing treats all tasks equally as one type and supports periodic time-sensitive tasks by allocating each of them a share of system power adequate to meet the highest energy demand in all periods. However, an overly conservative power share is usually required to guarantee that all time constraints are met. To provide more effective and flexible support for various types of time-sensitive tasks in general-purpose operating systems, an extra real-time-friendly mechanism is introduced that combines priority-based scheduling with energy-based fair queuing. Since a method is available to control the maximum time a time-sensitive task can run with priority, power control and time-constraint meeting can be flexibly traded off. A SystemC-based test bench is designed to assess the algorithms. Simulation results show the success of energy-based fair queuing in achieving proportional energy use, time-constraint meeting, and a proper trade-off between them.
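The central mechanism of the thesis, assigning each task a power share and serving energy in proportion to it, is structurally the same as classic fair queuing with energy in place of bandwidth. A toy sketch using virtual finish times F = S + energy/share follows; the task names, units, and data structures are illustrative, not the thesis's implementation.

```python
import heapq

def energy_fair_schedule(requests, shares):
    """Serve per-task energy requests in order of energy-domain virtual
    finish time F = S + e/share, so the cumulative energy delivered to
    each task stays proportional to its assigned power share.
    `requests`: task -> list of energy demands; `shares`: task -> power share.
    """
    last_finish = {t: 0.0 for t in requests}
    pending = {t: list(reversed(reqs)) for t, reqs in requests.items()}
    heap, order, vtime = [], [], 0.0
    for t in requests:                      # enqueue each task's first request
        if pending[t]:
            e = pending[t].pop()
            heapq.heappush(heap, (max(vtime, last_finish[t]) + e / shares[t], t, e))
    while heap:
        finish, t, e = heapq.heappop(heap)  # serve smallest virtual finish time
        order.append((t, e))
        vtime = last_finish[t] = finish
        if pending[t]:                      # enqueue the task's next request
            e2 = pending[t].pop()
            heapq.heappush(heap, (max(vtime, last_finish[t]) + e2 / shares[t], t, e2))
    return order

# With shares 2:1, task A is served twice as much energy as task B over time.
order = energy_fair_schedule({'A': [1, 1, 1, 1], 'B': [1, 1]}, {'A': 2, 'B': 1})
```

Because each task's virtual finish time advances inversely with its share, no task with a positive share can be starved of energy, which is the guarantee the abstract emphasizes.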
Abstract:
Understanding the structure and dynamics of the intricate network of connections among people who consume products through the Internet is an extremely useful asset for studying emergent properties related to social behavior. This knowledge could be useful, for example, to improve the performance of personal recommendation algorithms. In this contribution, we analyzed five-year records of movie-rating transactions provided by Netflix, a movie rental platform where users rate movies from an online catalog. This dataset can be studied as a bipartite user-item network whose structure evolves in time. Even though several topological properties of subsets of this bipartite network have been reported with a model that combines random and preferential attachment mechanisms [Beguerisse Díaz et al., 2010], there are still many aspects worth exploring, as they are connected to relevant phenomena underlying the evolution of the network. In this work, we test the hypothesis that bursty human behavior is essential to describe how a bipartite user-item network evolves in time. To that end, we propose a novel model that combines, for user nodes, a network growth prescription based on a preferential attachment mechanism acting not only in the topological domain (i.e., based on node degrees) but also in the time domain. In the case of items, the model mixes degree-based preferential attachment and random selection. With these ingredients, the model is not only able to reproduce the asymptotic degree distribution, but also shows excellent agreement with the Netflix data in several time-dependent topological properties.
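The proposed ingredients, user attachment preferential in both degree and time (recency) and item selection mixing degree-preferential and random choice, can be sketched as a toy generator. The recency weighting, probabilities, and parameter values below are illustrative assumptions, not the paper's calibrated model.

```python
import random

def grow_bipartite(n_events, p_new_user=0.1, p_random_item=0.2,
                   n_items=50, decay=0.9, seed=0):
    """Grow a bipartite user-item network one rating event at a time.

    User choice: preferential in degree plus a geometrically decaying
    recency weight (a crude stand-in for bursty, time-domain attachment).
    Item choice: degree-preferential with probability 1 - p_random_item,
    uniform random otherwise.
    """
    rng = random.Random(seed)
    user_degree, user_recency = [], []
    item_degree = [0] * n_items
    edges = []
    for _ in range(n_events):
        user_recency = [w * decay for w in user_recency]  # age recency weights
        if not user_degree or rng.random() < p_new_user:
            user_degree.append(0)       # a new user enters the network
            user_recency.append(1.0)
            u = len(user_degree) - 1
        else:
            weights = [d + w for d, w in zip(user_degree, user_recency)]
            u = rng.choices(range(len(user_degree)), weights=weights)[0]
        if sum(item_degree) == 0 or rng.random() < p_random_item:
            i = rng.randrange(n_items)  # random item selection
        else:
            i = rng.choices(range(n_items), weights=item_degree)[0]
        user_degree[u] += 1
        user_recency[u] += 1.0          # recent activity boosts future attachment
        item_degree[i] += 1
        edges.append((u, i))
    return edges, user_degree, item_degree

edges, ud, idg = grow_bipartite(200)
```

Dropping the recency term (`decay = 0`, no recency boost) reduces this to plain degree-preferential attachment, which is the natural baseline for testing whether the time-domain ingredient is really needed to match the data.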
Abstract:
Formation of the neuromuscular junction (NMJ) depends upon a nerve-derived protein, agrin, acting by means of a muscle-specific receptor tyrosine kinase, MuSK, as well as a required accessory receptor protein known as MASC. We report that MuSK does not merely play a structural role by demonstrating that MuSK kinase activity is required for inducing acetylcholine receptor (AChR) clustering. We also show that MuSK is necessary, and that MuSK kinase domain activation is sufficient, to mediate a key early event in NMJ formation—phosphorylation of the AChR. However, MuSK kinase domain activation and the resulting AChR phosphorylation are not sufficient for AChR clustering; thus we show that the MuSK ectodomain is also required. These results indicate that AChR phosphorylation is not the sole trigger of the clustering process. Moreover, our results suggest that, unlike the ectodomain of all other receptor tyrosine kinases, the MuSK ectodomain plays a required role in addition to simply mediating ligand binding and receptor dimerization, perhaps by helping to recruit NMJ components to a MuSK-based scaffold.