879 results for Response time (computer systems)
Abstract:
The electrochromic behavior of iron complexes derived from tetra-2-pyridyl-1,4-pyrazine (TPPZ) and a hexacyanoferrate species in polyelectrolytic multilayer adsorbed films is described for the first time. This complex macromolecule was deposited onto indium-tin oxide (ITO) substrates via self-assembly, and the morphology of the modified electrodes was studied using atomic force microscopy (AFM), which indicated that the hybrid film containing the polyelectrolyte multilayer and the iron complex was highly homogeneous and approximately 50 nm thick. The modified electrodes exhibited excellent electrochromic behavior, with both intense and persistent coloration as well as a chromatic contrast of approximately 70%. In addition, this system achieved high electrochromic efficiency (over 70 cm² C⁻¹ at 630 nm) and a response time measured in milliseconds. The electrode was cycled more than 10³ times, indicating excellent stability.
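Coloration efficiency of the kind quoted above (cm² C⁻¹) is conventionally defined as the change in optical density per unit of injected charge density, η = ΔOD/Q = log₁₀(T_bleached/T_colored)/Q. A minimal sketch of that arithmetic, with purely illustrative transmittance and charge values not taken from the abstract:

```python
import math

def coloration_efficiency(t_bleached, t_colored, charge_density):
    """Coloration efficiency eta (cm^2/C): change in optical density,
    delta OD = log10(T_bleached / T_colored), per injected charge
    density Q (C/cm^2) at a fixed wavelength."""
    delta_od = math.log10(t_bleached / t_colored)
    return delta_od / charge_density

# Illustrative values (not from the abstract): transmittance switching
# from 75% to 50% for 2.5 mC/cm^2 of injected charge.
print(round(coloration_efficiency(0.75, 0.50, 2.5e-3), 1))  # 70.4
```

The same relation is often used in reverse, to estimate how much charge a film must exchange to reach a target contrast.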
Abstract:
Background: Ontologies have increasingly been used in the biomedical domain, which has prompted the emergence of different initiatives to facilitate their development and integration. The Open Biological and Biomedical Ontologies (OBO) Foundry consortium provides a repository of life-science ontologies, which are developed according to a set of shared principles. This consortium has developed an ontology called the OBO Relation Ontology, which aims at standardizing the different types of biological entity classes and associated relationships. Since ontologies are primarily intended to be used by humans, the use of graphical notations for ontology development facilitates the capture, comprehension and communication of knowledge between its users. However, OBO Foundry ontologies are captured and represented using essentially text-based notations. The Unified Modeling Language (UML) provides a standard and widely used graphical notation for modeling computer systems. UML provides a well-defined set of modeling elements, which can be extended using a built-in extension mechanism named Profile. Thus, this work aims at developing a UML profile for the OBO Relation Ontology to provide a domain-specific set of modeling elements that can be used to create standard UML-based ontologies in the biomedical domain. Results: We have studied the OBO Relation Ontology, the UML metamodel and the UML profiling mechanism. Based on these studies, we have proposed an extension to the UML metamodel in conformance with the OBO Relation Ontology and we have defined a profile that implements the extended metamodel. Finally, we have applied the proposed UML profile in the development of a number of fragments from different ontologies. In particular, we have considered the Gene Ontology (GO), the PRotein Ontology (PRO) and the Xenopus Anatomy and Development Ontology (XAO).
Conclusions: The use of an established and well-known graphical language in the development of biomedical ontologies provides a more intuitive form of capturing and representing knowledge than using only text-based notations. The use of the profile requires the domain expert to reason about the underlying semantics of the concepts and relationships being modeled, which helps prevent the introduction of inconsistencies in an ontology under development and facilitates the identification and correction of errors in an already defined ontology.
Abstract:
Background: The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data are available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only the access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results: We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. Then, we have defined a number of activities and associated guidelines to prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data.
Conclusions: The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus ensuring accurate data exchange and correct interpretation of exchanged information.
Surface ecophysiological behavior across vegetation and moisture gradients in tropical South America
Abstract:
Surface ecophysiology at five sites in tropical South America across vegetation and moisture gradients is investigated. From the moist northwest (Manaus) to the relatively dry southeast (Pé de Gigante, state of São Paulo), simulated seasonal cycles of latent and sensible heat and carbon flux produced with the Simple Biosphere Model (SiB3) are confronted with observational data. In the northwest, abundant moisture is available, suggesting that the ecosystem is light-limited. In these wettest regions, the Bowen ratio is consistently low, with little or no annual cycle. Carbon flux shows little or no annual cycle as well; efflux and uptake are determined by high-frequency variability in light and moisture availability. Moving down-gradient in annual precipitation amount, the dry season length becomes more clearly defined. In these regions, a dry-season sink of carbon is observed and simulated. This sink results from the combination of increased photosynthetic production due to higher light levels and decreased respiratory efflux due to soil drying. The differential response times of photosynthetic and respiratory processes produce the observed annual cycles of net carbon flux. In drier regions, moisture and carbon fluxes are in phase; there is carbon uptake during the seasonal rains and efflux during the dry season. At the driest site, there is also a large annual cycle in latent and sensible heat flux.
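The Bowen ratio invoked above is simply the ratio of sensible to latent heat flux; low values mean most available energy drives evapotranspiration. A trivial sketch with hypothetical flux values, not site observations:

```python
def bowen_ratio(sensible_heat_flux, latent_heat_flux):
    """Bowen ratio: sensible heat flux H divided by latent heat flux LE.
    Low values indicate energy going mostly into evapotranspiration."""
    return sensible_heat_flux / latent_heat_flux

# Hypothetical fluxes (W/m^2): a moist, light-limited site (low ratio)
# versus a reading typical of a pronounced dry season.
print(round(bowen_ratio(40.0, 120.0), 2))   # 0.33
print(round(bowen_ratio(120.0, 60.0), 2))   # 2.0
```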
Abstract:
An important requirement for computer systems developed for the agricultural sector is to accommodate the heterogeneity of data generated in different processes. Most problems related to this heterogeneity arise from the lack of a standard across the different computing solutions proposed. An efficient solution is to create a single standard for data exchange. The study of the actual process involved in cotton production was based on research developed by the Brazilian Agricultural Research Corporation (EMBRAPA) that reports all phases as a result of the compilation of several theoretical and practical studies related to the cotton crop. The proposal of a standard starts with the identification of the most important classes of data involved in the process, and includes an ontology that systematizes the concepts related to the production of cotton fiber, resulting in a set of classes, relations, functions and instances. The results are used as a reference for the development of computational tools, transforming implicit knowledge into applications that support the knowledge described. This research is based on data from the Midwest of Brazil. The cotton process was chosen as a case study because Brazil is one of the major players in this segment and several improvements are required for system integration.
Abstract:
Our understanding of the climate of northern Sweden during the late Holocene is largely dependent on proxy-data series. These datasets remain spatially and temporally sparse, and instrumental series are rare prior to the mid-19th century. Nevertheless, the glaciology and paleo-glaciology of the region have strong potential significance for the exploration of climate change scenarios, past and future. The aim of this thesis is to investigate the 19th and 20th century climate in the northern Swedish mountain range. This provides a good opportunity to analyse the natural variability of the climate before the onset of the industrial epoch. Developing a temporal understanding of fluctuations in glacier front positions and glacier mass balance, linked to a better understanding of their interaction and relative significance to climate, is fundamental to the assessment of past climate. I have chosen to investigate previously unexplored temperature data from northern Sweden from between 1802 and 1860 and have combined it with a temperature series from a synoptic station in Haparanda, which began operation in 1859, in order to create a reliable long temperature series for the period 1802 to 2002. I have also investigated two different glaciers, Pårteglaciären and Salajekna, which are located in different climatic environments. These glaciers have, from a Swedish perspective, long observational records. Furthermore, I have investigated a recurring jökulhlaup at the glacier Sälkaglaciären in order to analyse glacier-climate relationships with respect to the jökulhlaups. A number of datasets are presented, including: glacier frontal changes, in situ and photogrammetric mass balance data, in situ and satellite radar interferometry measurements of surface velocity, radar measurements, ice volume data and a temperature series.
All these datasets are analysed in order to investigate the response of the glaciers to climatic stimuli, to attribute specific behaviour to particular climates and to analyse the 19th and 20th century glacier/climate relationships in northern Sweden. The 19th century was characterized by cold conditions in northern Sweden, particularly in winter. Significant changes in the amplitude of the annual temperature cycle are evident. Through the 19th century there is a marked decreasing trend in the amplitude of the data, suggesting a change towards a prevalence of maritime (westerly) air masses, something which has characterised the 20th century. The investigations on Salajekna support the conclusion that the major part of the 19th century was cold and dry. The 19th century advance of Salajekna was probably caused by a colder climate in the late 18th and early 19th centuries, coupled with a weakening of the westerly airflow. The investigations on Pårteglaciären show that the glacier has a response time of ~200 years. They also suggest that there was a relatively high frequency of easterly winds providing the glacier with winter precipitation during the 19th century. Glaciers have very different response times and are sensitive to different climatic parameters. Glaciers in rather continental areas of the Subarctic and Arctic can have very long response times because of mass balance considerations and not primarily glacier dynamics. This is of vital importance for analysing Arctic and Subarctic glacier behaviour in a global change perspective. It is far from evident that the behaviour of the glacier fronts today reflects the present climate.
Abstract:
[EN] The purpose of this investigation was to determine the contribution of muscle O₂ consumption (mVO2) to pulmonary O₂ uptake (pVO2) during both low-intensity (LI) and high-intensity (HI) knee-extension exercise, and during subsequent recovery, in humans. Seven healthy male subjects (age 20-25 years) completed a series of LI and HI square-wave exercise tests in which mVO2 (direct Fick technique) and pVO2 (indirect calorimetry) were measured simultaneously. The mean blood transit time from the muscle capillaries to the lung (MTTc-l) was also estimated (based on measured blood transit times from femoral artery to vein and vein to artery). The kinetics of mVO2 and pVO2 were modelled using non-linear regression. The time constant (tau) describing the phase II pVO2 kinetics following the onset of exercise was not significantly different from the mean response time (initial time delay + tau) for mVO2 kinetics for LI (30 ± 3 vs 30 ± 3 s) but was slightly higher (P < 0.05) for HI (32 ± 3 vs 29 ± 4 s); the responses were closely correlated (r = 0.95 and r = 0.95; P < 0.01) for both intensities. In recovery, agreement between the responses was more limited both for LI (36 ± 4 vs 18 ± 4 s, P < 0.05; r = -0.01) and HI (33 ± 3 vs 27 ± 3 s, P > 0.05; r = -0.40). MTTc-l was approximately 17 s just before exercise and decreased to 12 and 10 s after 5 s of exercise for LI and HI, respectively. These data indicate that the phase II pVO2 kinetics reflect mVO2 kinetics during exercise but not during recovery, where caution in data interpretation is advised. Increased mVO2 probably makes a small contribution to pVO2 during the first 15-20 s of exercise.
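The phase II kinetics and "mean response time" compared above come from a mono-exponential model fitted by non-linear regression. A minimal sketch of that model, with hypothetical parameter values chosen only so the mean response time matches the ~30 s reported for low-intensity exercise:

```python
import math

def vo2_phase2(t, baseline, amplitude, delay, tau):
    """Mono-exponential phase II model fitted by non-linear regression:
    VO2(t) = baseline + amplitude * (1 - exp(-(t - delay) / tau))
    for t >= delay; the response stays at baseline before the delay."""
    if t < delay:
        return baseline
    return baseline + amplitude * (1.0 - math.exp(-(t - delay) / tau))

def mean_response_time(delay, tau):
    """Mean response time = initial time delay + time constant, the
    quantity compared against the phase II tau in the abstract."""
    return delay + tau

# Hypothetical values (L/min and s), not the study's fitted parameters.
print(mean_response_time(10.0, 20.0))                    # 30.0
print(round(vo2_phase2(30.0, 0.5, 1.0, 10.0, 20.0), 3))  # 1.132
```

At t = delay + tau the response has covered 1 - 1/e ≈ 63% of its amplitude, which is what makes tau a convenient single-number summary of the kinetics.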
Abstract:
Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)
Abstract:
CHAPTER 1: FLUID-VISCOUS DAMPERS In this chapter fluid-viscous dampers are introduced. The first section focuses on the technical characteristics of these devices, their mechanical behavior and the latest evolution of the technology with which they are equipped. In the second section we report the definitions and guidelines for the design of these devices included in some international codes. In the third section the results of some experimental tests, carried out by various authors, on the response of these devices to external forces are discussed. For this purpose we report some technical data sheets that are usually supplied with the devices now available on the international market. In the third section we also show some analytic models proposed by various authors, which are able to describe efficiently the physical behavior of fluid-viscous dampers. In the last section we present some cases of application of these devices to existing structures and to newly built structures. We also show some cases in which these devices have proven useful for purposes other than the reduction of seismic actions on structures. CHAPTER 2: DESIGN METHODS PROPOSED IN LITERATURE In this chapter the most widespread design methods proposed in the literature for structures equipped with fluid-viscous dampers are introduced. In the first part the response of sdf systems in the case of a harmonic external force is studied; in the last part the response in the case of a random external force is discussed. In the first section the equations of motion for an elastic-linear sdf system equipped with a non-linear fluid-viscous damper subjected to a harmonic force are introduced. This differential problem is analytically quite complex and cannot be solved in closed form. Therefore some authors have proposed approximate solution methods. The most widespread methods are based on equivalence principles between the non-linear device and an equivalent linear one.
Operating in this way it is possible to define an equivalent damping ratio, and the problem becomes linear; the solution of the equivalent problem is well known. In the following section two linearization techniques proposed in the literature are described: the first is based on the equivalence of the energy dissipated by the two devices and the second on the equivalence of power consumption. After that we compare these two techniques by studying the response of an sdf system subjected to a harmonic force. By introducing the equivalent damping ratio we can write the equation of motion of the non-linear differential problem in an implicit form, dividing, as usual, by the mass of the system. In this way we obtain a reduction of the number of variables by introducing the natural frequency of the system. The equation of motion written in this form has two important properties: the response is linearly dependent on the amplitude of the external force, and the response depends only on the ratio between the frequency of the external harmonic force and the natural frequency of the system, not on their individual values. In the last section all these considerations are extended to the case of a random external force. CHAPTER 3: DESIGN METHOD PROPOSED In this chapter the theoretical basis of the proposed design method is introduced. The need to propose a new design method for structures equipped with fluid-viscous dampers arises from the observation that the methods reported in the literature are always iterative, because the response affects some parameters included in the equation of motion (such as the equivalent damping ratio). In the first section the dimensionless parameter ε is introduced. This parameter has been obtained from the definition of the equivalent damping ratio. The implicit form of the equation of motion is written by introducing the parameter ε instead of the equivalent damping ratio.
This new implicit equation of motion has no terms affected by the response, so that once ε is known the response can be evaluated directly. In the second section it is discussed how the parameter ε affects some characteristics of the response: drift, velocity and base shear. All the results described up to this point have been obtained while retaining the non-linearity of the dampers' behavior. In order to obtain a linear formulation of the problem, which can be solved using the well-known methods of structural dynamics, as was done before for the iterative methods by introducing the equivalent damping ratio, it is shown how the equivalent damping ratio can be evaluated from the value of ε. Operating in this way, once the parameter ε is known, it is quite easy to estimate the equivalent damping ratio and to proceed with a classic linear analysis. In the last section it is shown how the parameter ε can be taken as a reference for evaluating the convenience of using non-linear dampers instead of linear ones on the basis of the type of external force and the characteristics of the system. CHAPTER 4: MULTI-DEGREE OF FREEDOM SYSTEMS In this chapter the design methods for an elastic-linear mdf system equipped with non-linear fluid-viscous dampers are introduced. It has already been shown that, in sdf systems, the response of the structure can be evaluated through the estimation of the equivalent damping ratio (ξsd), assuming the behavior of the structure to be elastic-linear. We would like to mention that some adjusting coefficients, to be applied to the equivalent damping ratio in order to account for the actual (non-linear) behavior of the structure, have already been proposed in the literature; such coefficients are usually expressed in terms of ductility, but their treatment is beyond the aims of this thesis and we do not go into it further.
The method usually proposed in the literature is based on energy equivalence: even though this procedure has a solid theoretical basis, it must necessarily include some iterative process, because the expression of the equivalent damping ratio contains a term of the response. This procedure was introduced primarily by Ramirez, Constantinou et al. in 2000. It is reported in the first section and is referred to as the "Iterative Method". Following the guidelines for sdf systems reported in the previous chapters, a procedure for the assessment of the parameter ε in the case of mdf systems is introduced. Operating in this way the evaluation of the equivalent damping ratio (ξsd) can be done directly, without implementing iterative processes. This procedure is referred to as the "Direct Method" and is reported in the second section. In the third section the two methods are analyzed by studying 4 cases of two moment-resisting steel frames subjected to real accelerograms: the response of the system calculated using the two methods is compared with the numerical response obtained from the software SAP2000-NL, a CSI product. In the last section a procedure is introduced to create spectra of the equivalent damping ratio, as a function of the parameter ε and the natural period of the system for a fixed value of the exponent α, starting from the elastic response spectra provided by any international code.
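The energy-equivalence linearization these chapters revolve around has a standard closed form: for a damper force F = C·|v|^α·sgn(v) under harmonic motion, the energy dissipated per cycle involves a gamma-function constant, and equating it with the linear-damper dissipation yields an equivalent damping ratio. A minimal sketch of that arithmetic; all numerical values and symbol names are illustrative, not taken from the thesis:

```python
import math

def beta(alpha):
    """Dissipation constant for a damper force F = C * |v|**alpha * sign(v)
    under harmonic motion u(t) = u0 * sin(w * t): the energy dissipated
    per cycle is E = beta(alpha) * C * w**alpha * u0**(1 + alpha)."""
    return (2.0**(2.0 + alpha)
            * math.gamma(1.0 + alpha / 2.0)**2
            / math.gamma(2.0 + alpha))

def equivalent_damping_ratio(C, alpha, u0, w, m, wn):
    """Equate E above with the linear-damper value pi * C_eq * w * u0**2
    to get C_eq, then return the damping ratio C_eq / (2 * m * wn)."""
    c_eq = beta(alpha) * C * (w * u0)**(alpha - 1.0) / math.pi
    return c_eq / (2.0 * m * wn)

# Sanity check: for alpha = 1 (a linear damper) beta(1) = pi, so the
# expression collapses to the classical xi = C / (2 * m * wn).
print(round(beta(1.0), 4))                                     # 3.1416
print(round(equivalent_damping_ratio(2.0, 1.0, 0.1, 5.0, 1.0, 5.0), 3))  # 0.2
```

Note the α < 1 case makes C_eq depend on the response amplitude u0, which is precisely why the literature methods described above become iterative.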
Abstract:
The impact of plasma technologies is growing both in the academic and in the industrial fields. Nowadays, great interest is focused on plasma applications in the aeronautics and astronautics domains. Plasma actuators based on the Magneto-Hydro-Dynamic (MHD) and Electro-Hydro-Dynamic (EHD) interactions are potentially able to suitably modify the fluid-dynamic characteristics around a flying body without using moving parts. This could lead to the control of an aircraft with negligible response time, greater reliability and improved performance. In order to study the aforementioned interactions, a series of experiments and a wide number of diagnostic techniques have been utilized. The EHD interaction, realized by means of a Dielectric Barrier Discharge (DBD) actuator, and its impact on the boundary layer have been evaluated in two different experiments. In the first one, a three-phase multi-electrode flat panel actuator is used. Different external flow velocities (from 1 to 20 m/s) and different values of the supplied voltage and frequency have been considered. Moreover, the phase sequence was changed to verify the influence of the electric field existing between successive phases. Measurements of the induced speed showed the effect of the supply voltage, the frequency and the phase order on the momentum-transfer phenomenon. Velocity gains inside the boundary layer of about 5 m/s have been obtained. Spectroscopic measurements allowed the determination of the rotational and vibrational temperatures of the plasma, which lie in the ranges 320-440 K and 3000-3900 K respectively. A deviation from thermodynamic equilibrium was found. The second EHD experiment was carried out on a single electrode pair DBD actuator driven by nano-pulses superimposed on a DC or an AC bias.
This new supply system separates the plasma formation mechanism from the accelerating action on the fluid, leading to a higher degree of control of the process. Both the voltage and frequency of the nano-pulses and the amplitude and waveform of the bias were varied during the experiment. Plasma jets and vortex behavior were observed by means of fast Schlieren imaging. This allowed a deeper understanding of the EHD interaction process. A velocity increase in the boundary layer of about 2 m/s was measured. Thrust measurements were performed by means of a balance and compared with experimental data reported in the literature. For similar voltage amplitudes, thrusts larger than those of the literature were observed. Surface charge measurements led to a modified DBD actuator able to achieve performance similar to that of other experiments, in this case, however, using a DC bias in place of the AC bias. MHD interaction experiments were carried out in a hypersonic wind tunnel in argon with a Mach 6 flow. Before the MHD experiments, a thermal, fluid-dynamic and plasma characterization of the hypersonic argon plasma flow was performed. The electron temperature and electron number density were determined by means of emission spectroscopy and microwave absorption measurements. A deviation from thermodynamic equilibrium was observed. The electron number density was found to be frozen at the stagnation-region condition during the expansion through the nozzle. MHD experiments were performed using two axially symmetric test bodies. Similar magnetic configurations were used. Permanent magnets inserted into the test bodies generated azimuthal currents in the plasma around the conical shape of the body. These Faraday currents are responsible for the MHD body force, which acts against the flow.
The MHD interaction process was observed by means of fast imaging, pressure and electrical measurements. Images showed bright rings due to the Faraday currents heating and exciting the plasma particles. Pressure measurements showed increases of the pressure in the regions where the MHD interaction is large; the pressure is 10 to 15% higher than when the MHD interaction is inactive. Finally, by means of electrostatic probes mounted flush on the lateral surface of the test body, Hall fields of about 500 V/m were measured. These results have been used for the validation of a numerical MHD code.
Abstract:
The present work investigates qualitative aspects of products that fall outside the classic Italian view of food production, with the exception of the apricot, a fruit that is nonetheless less studied by the methods considered here. The development of computer systems and of advanced software dedicated to the statistical processing of data has permitted the application of advanced technologies, including the analysis of niche products. Near-infrared spectroscopic analysis has been applied in the chemical industry for over twenty years and was subsequently applied in the food industry with great success for non-destructive in-line and off-line analysis. The work presented below ranges from the use of spectroscopy for the determination of some rheological indices of ice cream to the characterization of the main quality indices of apricots and fresh dates and the determination of the production areas of pistachio. Alongside spectroscopy, different methods of multivariate analysis are illustrated, for the interpretation of spectra or for the construction of qualitative estimation models. The thesis is divided into four separate studies that consider as many products. Each of them is introduced by its own premise and ends with its own bibliography. These studies are preceded by a general discussion of the state of the art and the basics of NIR spectroscopy.
Abstract:
In computer systems, and specifically in multithreaded, parallel and distributed systems, a deadlock is both a very subtle problem, because it is difficult to prevent while coding the system, and a very dangerous one: a deadlocked system is easily brought to a complete standstill, with consequences ranging from simple annoyances to life-threatening circumstances, with the non-negligible scenario of economic losses in between. How, then, can this problem be avoided? Many possible solutions have been studied, proposed and implemented. In this thesis we focus on the detection of deadlocks with a static program analysis technique, i.e. an analysis performed without actually executing the program. To begin, we briefly present the static Deadlock Analysis Model developed for coreABS−− in chapter 1, then we proceed by detailing the class-based coreABS−− language in chapter 2. In chapter 3 we lay the foundation for further discussion by analyzing the differences between coreABS−− and ASP, an untyped object-based calculus, so as to show how the Deadlock Analysis can be extended to object-based languages in general. In this regard, we make some hypotheses explicit in chapter 4, first by presenting a possible, unproven type system for ASP, modeled after the Deadlock Analysis Model developed for coreABS−−. We then conclude our discussion by presenting a simpler hypothesis, which may allow circumventing the difficulties that arise from the definition of the "ad hoc" type system discussed in the preceding chapter.
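The analysis model developed for coreABS−− in the thesis is far richer, but the core condition any deadlock detector looks for, a circular wait, can be illustrated by cycle detection on a wait-for graph; the graph shape and task names below are hypothetical:

```python
def find_deadlock(wait_for):
    """Cycle detection on a wait-for graph {task: set of tasks it is
    blocked on}; a cycle is a circular wait, i.e. a deadlock.
    Returns the cycle as a list of tasks, or None if there is none."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {}
    stack = []

    def visit(task):
        color[task] = GREY
        stack.append(task)
        for dep in wait_for.get(task, ()):
            if color.get(dep, WHITE) == GREY:    # back edge: cycle found
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                cycle = visit(dep)
                if cycle:
                    return cycle
        color[task] = BLACK
        stack.pop()
        return None

    for task in wait_for:
        if color.get(task, WHITE) == WHITE:
            cycle = visit(task)
            if cycle:
                return cycle
    return None

# Task A waits for a lock B holds while B waits for a lock A holds:
print(find_deadlock({"A": {"B"}, "B": {"A"}}))   # ['A', 'B', 'A']
print(find_deadlock({"A": {"B"}, "B": set()}))   # None
```

A static analysis of the kind the thesis pursues builds such dependency information from the program text alone, so cycles can be reported before the program ever runs.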
Abstract:
Gastrointestinal stromal tumors (GISTs) are the most common mesenchymal tumors of the gastrointestinal tract. This work considers the pharmacological response in GIST patients treated with imatinib from two different angles: the genetic and the somatic point of view. We analyzed the influence of polymorphisms on treatment outcome, considering SNPs in genes involved in drug transport and in the folate pathway. Naturally, these intriguing results cannot be considered the only main mechanism in imatinib response. GIST depends mainly on oncogenic gain-of-function mutations in the tyrosine kinase receptor genes KIT or PDGFRA, and the mutational status of these two genes, or the acquisition of secondary mutations, is considered the main player in GIST development and progression. To this purpose we analyzed the secondary mutations to better understand how they are involved in imatinib resistance. In our analysis we considered both imatinib and the second-line treatment, sunitinib, in a subset of progressive patients. KIT/PDGFRA mutation analysis is an important tool for physicians, as specific mutations may guide therapeutic choices. Currently, the only adaptation in treatment strategy is an imatinib starting dose of 800 mg/day in KIT exon-9-mutated GISTs. In the attempt to individualize treatment, genetic polymorphisms represent a novelty in the definition of biomarkers of imatinib response in addition to the use of tumor genotype. Accumulating data indicate a contributing role of pharmacokinetics in imatinib efficacy, as well as in initial response, time to progression and acquired resistance. At the same time it is becoming evident that host genetic factors may contribute to the observed inter-patient pharmacokinetic variability. Genetic polymorphisms in transporter and metabolism genes may affect the activity or stability of the encoded enzymes.
Thus, integrating pharmacogenetic data of imatinib transporters and metabolizing genes, whose interplay has yet to be fully unraveled, has the potential to provide further insight into imatinib response/resistance mechanisms.
Abstract:
Digital evidence requires the same precautions as any other scientific investigation. An overview is provided of the methodological and practical aspects of digital forensics in light of the recent standard ISO/IEC 27037:2012 on the handling of digital evidence during the identification, collection, acquisition and preservation phases. These methodologies scrupulously comply with the integrity and authenticity requirements set by the legislation on digital forensics, in particular Law 48/2008, which ratified the Budapest Convention on Cybercrime. Regarding the crime of child pornography, a review of European and national legislation is offered, with emphasis on the aspects relevant to forensic analysis. Since file sharing on peer-to-peer networks is the channel on which the exchange of illicit material is most concentrated, an overview of the most widespread protocols and systems is provided, with emphasis on the eDonkey network and the eMule software, which are widely used among Italian users. The problems encountered in the investigation and repression of the phenomenon, which fall under the responsibility of the police forces, are touched upon, before focusing on the main contribution, concerning the forensic analysis of computer systems seized from suspects (or defendants) in child-pornography cases: the design and implementation of eMuleForensic makes it possible to carry out, in an extremely precise and rapid way, the analysis of the events that occur when the eMule file-sharing software is used; the software is available both online at the URL http://www.emuleforensic.com and as a tool within the DEFT forensic distribution. Finally, a proposal for an operational protocol for the forensic analysis of computer systems involved in child-pornography investigations is provided.