Abstract:
At the end of the 19th century, Dr. Ramón y Cajal, a scientific pioneer, discovered the individual cellular elements, called neurons, that compose the nervous system. He also noted the complexity of this system and asserted that new neurons could not be integrated into the adult nervous system. One of his well-known quotations, "In the adult centers, the nerve paths are something fixed, ended, immutable. Everything may die, nothing may be regenerated," is representative of the dogma of the era (Ramón y Cajal 1928). Landmark studies conducted in the 1960s-1970s suggested a different view: it was demonstrated that new neurons can be generated in adulthood, but this discovery was met with widespread skepticism within the scientific community. It took 30 years for the concept of adult neurogenesis to become broadly accepted. This discovery, together with numerous technical advances, opened the door to potential new therapeutic targets for neurodegenerative diseases. Adult neural stem cells (NSCs) reside mainly in two niches of the brain: the subventricular zone of the lateral ventricles and the dentate gyrus of the hippocampus. Under physiological conditions, the level of neurogenesis is relatively high in the subventricular zone, in contrast to the hippocampus, where certain steps are limiting. The spinal cord, by contrast, is generally described as a quiescent environment. One of the main questions raised by these discoveries is: how can adult NSCs be activated in order to increase the levels of neurogenesis? In the hippocampus, the capacity of an enriched environment (including cognitive stimulation, exercise, and social interactions) to promote hippocampal neurogenesis has already been demonstrated.
The plasticity of this region is important because it can play a key role in the recovery of learning and memory deficits. In the spinal cord, in vitro studies have shown that the ependymal cells surrounding the central canal have self-renewal and multipotency capacities (neurons, astrocytes, oligodendrocytes). Interestingly, in vivo, following a spinal cord injury, ependymal cells are activated and can self-renew, but can only give rise to glial cell types (astrocytes and oligodendrocytes). This new post-injury function shows that plasticity is still possible in a quiescent environment and can be exploited to develop endogenous repair strategies in the spinal cord. Adult NSCs play an important role in maintaining the physiological functions of the healthy brain and in neuronal repair after injury. However, there are few data on the mechanisms that allow the activation of quiescent NSCs to maintain these functions. The general objective is to elucidate the mechanisms underlying NSC activation in the adult central nervous system. To address this objective, we established two complementary approaches in adult mice: 1) activation of hippocampal NSCs by an enriched environment (EE), and 2) activation of spinal cord NSCs by neuroinflammation following injury. In addition, 3) to gain further insight into the molecular mechanisms at play in these models, we used transcriptomic approaches to open new perspectives. The first project establishes new cellular and molecular mechanisms through which the enriched environment modulates adult brain plasticity.
We first evaluated the contribution of each component of the enriched environment to hippocampal neurogenesis (Chapter II). Voluntary exercise promotes neurogenesis, whereas the social context increases neuronal activation. We then determined the effect of these components on behavioral performance and on the transcriptome, using an eight-arm radial maze to assess spatial memory together with a novel object recognition test, and RNA-Seq, respectively (Chapter III). Runners showed stronger short-term spatial recall, whereas mice exposed to social interactions showed greater cognitive flexibility in abandoning their old memories. Surprisingly, the RNA-Seq analysis identified clear differences in transcript expression between short- and long-distance runners, as well as in social mice (in the complex environment). The second project investigates how ependymal cells acquire NSC properties in vitro, or multipotency following injury in vivo (Chapter IV). RNA-Seq analysis revealed that transforming growth factor-β1 (TGF-β1) acts as an upstream regulator of the significant changes that follow spinal cord injury. We then confirmed the presence of this cytokine after injury and characterized its role in the proliferation, differentiation, and survival of neurosphere-initiating cells from the spinal cord. Our results suggest that TGF-β1 regulates the acquisition and expression of stem cell properties in spinal cord ependymal cells.
Abstract:
The study of immune system aging, i.e. immunosenescence, is a relatively new research topic. It is concerned with understanding the degradation processes of the immune system that signal loss of functionality and may ultimately lead to death. Even though immunosenescence cannot be prevented, there is great benefit in comprehending its causes, which may help to reverse some of the damage done and thus improve life expectancy. One of the main factors influencing the process of immunosenescence is the number and phenotypical variety of naive T cells in an individual. This work presents a review of immunosenescence, proposes a system dynamics model of the processes involved in maintaining the naive T cell repertoire, and presents some preliminary results.
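The abstract does not spell out the model's equations. As an illustration only, a system-dynamics formulation of naive T cell maintenance commonly combines thymic output that decays with age, homeostatic proliferation, and loss; the sketch below follows that standard structure, but every rate and initial value is an invented placeholder, not the model proposed in the work:

```python
# Minimal system-dynamics sketch of naive T cell maintenance.
# All parameter values are illustrative assumptions, not the thesis's model.

import math

def simulate_naive_t_cells(years=80.0, dt=0.01,
                           n0=1e11,            # initial naive T cell count
                           thymus0=1e9,        # thymic output per year at age 0
                           thymic_decay=0.044, # ~4.4 %/year thymic involution
                           prolif=0.02,        # homeostatic proliferation /year
                           loss=0.03):         # loss (death + priming) /year
    """Euler-integrate dN/dt = thymus(t) + (prolif - loss) * N."""
    n, t, history = n0, 0.0, []
    while t < years:
        thymic_output = thymus0 * math.exp(-thymic_decay * t)
        n += dt * (thymic_output + (prolif - loss) * n)
        t += dt
        history.append((t, n))
    return history

trajectory = simulate_naive_t_cells()
final_age, final_n = trajectory[-1]
```

With a net loss rate and waning thymic output, the pool slowly declines over a lifetime, which is the qualitative behavior immunosenescence models aim to reproduce.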
Abstract:
Doctoral thesis, Faculdade de Ciências do Mar e Ambiente, Univ. do Algarve, 2003
Abstract:
Laser speckle contrast imaging (LSCI) has the potential to be a powerful tool in medicine, but more research in the field is required before it can be used routinely. To support the progression of Michigan Tech's research in the field, a graphical user interface (GUI) was designed in Matlab to control the instrumentation of the experiments and to process the raw speckle images into contrast images as they are acquired. The design of the system was successful, and it is currently being used by Michigan Tech's Biomedical Engineering department. This thesis describes the development of the LSCI GUI and offers a full introduction to the history, theory, and applications of LSCI.
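In the standard LSCI formulation, the contrast image is the ratio of the standard deviation to the mean intensity over a small sliding window. The thesis tool is written in Matlab; purely as an illustration of that formula, a pure-Python sketch (the window size and test images below are invented):

```python
# Sketch of laser speckle contrast: K = sigma / mean over a sliding window.
# Pure Python for clarity; a real implementation would vectorize this.

import math

def speckle_contrast(image, win=3):
    """Return a contrast map; 'image' is a list of lists of intensities."""
    h, w = len(image), len(image[0])
    half = win // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(half, h - half):
        for x in range(half, w - half):
            pixels = [image[y + dy][x + dx]
                      for dy in range(-half, half + 1)
                      for dx in range(-half, half + 1)]
            mean = sum(pixels) / len(pixels)
            var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
            out[y][x] = math.sqrt(var) / mean if mean > 0 else 0.0
    return out

# A uniform image has zero contrast; a speckled one does not.
flat = [[100.0] * 5 for _ in range(5)]
speckled = [[100.0 if (x + y) % 2 else 10.0 for x in range(5)] for y in range(5)]
k_flat = speckle_contrast(flat)[2][2]
k_speckled = speckle_contrast(speckled)[2][2]
```

Low contrast indicates motion (blurring of the speckle pattern, e.g. blood flow); high contrast indicates a static region.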
Abstract:
Climate change, intensive use, and population growth are threatening the availability of water resources. New sources of water, better knowledge of existing ones, and improved water management strategies are of paramount importance. Groundwater is often considered a primary water source due to its advantages in terms of quantity, spatial distribution, and natural quality. Remote sensing techniques afford scientists a unique opportunity to characterize landscapes in order to assess groundwater resources, particularly in tectonically influenced areas. Aquifers in volcanic basins are considered the most productive aquifers in Latin America. Although topography is considered the primary driving force for groundwater flow in mountainous terrains, tectonic activity increases the complexity of these groundwater systems by altering the integrity of sedimentary rock units and the overlying drainage networks. Structural controls affect the primary hydraulic properties of the rock formations, developing barriers to flow in some cases and zones of preferential infiltration and subterranean flow in others. The study area focuses on the Quito Aquifer System (QAS) in Ecuador. The characterization of the hydrogeology started with a lineament analysis based on a combined remote sensing and digital terrain analysis approach. The application of classical tools for regional hydrogeological evaluation and shallow geophysical methods was useful to evaluate the impact of faulting and fracturing on the aquifer system. Given the spatial extension of the area and the complexity of the system, two levels of analysis were applied in this study. At the regional level, a lineament map was created for the QAS, and relationships between fractures, faults, and lineaments and the configuration of groundwater flow on the QAS were determined.
At the local level, in the Plateaus region of the QAS, a detailed lineament map was obtained by using high-spatial-resolution satellite imagery and an aspect map derived from a digital elevation model (DEM). This map was complemented by the analysis of morphotectonic indicators and shallow geophysics that characterize fracture patterns. The development of the groundwater flow system was studied, drawing upon data pertaining to the aquifer system's physical characteristics and topography. Hydrochemistry was used to ascertain the groundwater evolution and verify the correspondence of the flow patterns proposed in the flow system analysis. Isotopic analysis was employed to verify the origin of the groundwater. The results of this study show that tectonism plays a very important role in the hydrology of the QAS. The results also demonstrate that faults influence a great deal of the topographic characteristics of the QAS and, subsequently, the configuration of groundwater flow. Moreover, for the Plateaus region, the results demonstrate that the aquifer flow systems are affected by secondary porosity. This is a new conceptualization of the functioning of the aquifers of the QAS that will significantly contribute to the development of better strategies for the management of this important water resource.
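An aspect map of the kind used above can be derived from a DEM with finite differences: the aspect of a cell is the compass direction of its downslope (negative-gradient) vector. A minimal sketch — the grid orientation convention, cell size, and toy DEM are assumptions for illustration, not the study's data:

```python
# Sketch of deriving an aspect map (degrees clockwise from north) from a DEM
# via central differences. Assumes row index increases southward and column
# index increases eastward; cell size and the toy DEM are invented.

import math

def aspect_map(dem, cellsize=30.0):
    """Aspect in [0, 360) for interior cells; None where flat (undefined)."""
    h, w = len(dem), len(dem[0])
    out = [[None] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dzdx = (dem[y][x + 1] - dem[y][x - 1]) / (2 * cellsize)  # +east
            dzdy = (dem[y - 1][x] - dem[y + 1][x]) / (2 * cellsize)  # +north
            if dzdx == 0 and dzdy == 0:
                continue  # flat cell: aspect undefined
            # Downslope vector is minus the gradient; azimuth from north.
            out[y][x] = math.degrees(math.atan2(-dzdx, -dzdy)) % 360.0
    return out

east_slope = [[10.0 - x for x in range(5)] for _ in range(5)]
a = aspect_map(east_slope)[2][2]  # a plane dipping east faces 90 degrees
```

Abrupt, linear discontinuities in such an aspect map are one of the signatures used to trace lineaments.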
Abstract:
The first goal of this study is to analyse a real-world multiproduct onshore pipeline system in order to verify its hydraulic configuration and operational feasibility, by constructing a simulation model step by step from its elementary building blocks that replicates the operation of the real system as precisely as possible. The second goal is to develop this simulation model into a user-friendly tool that can be used to find an "optimal" or "best" product batch schedule for a one-year time period. Such a batch schedule can change dynamically as perturbations occur during operation that influence the behaviour of the entire system. The result of the simulation, the "best" batch schedule, is the one that minimizes the operational costs of the system. The costs involved in the simulation are inventory costs, interface costs, pumping costs, and penalty costs assigned to any unforeseen situations. The key factor determining the performance of the simulation model is the way time is represented. In our model, an event-based discrete time representation was selected as most appropriate for our purposes. This means that the time horizon is divided into intervals of unequal length based on events that change the state of the system. These events are the arrivals/departures of the tanker ships, the openings and closures of loading/unloading valves of storage tanks at both terminals, and the arrivals/departures of trains/trucks at the Delivery Terminal. In the feasibility study we analyse the system's operational performance with different Head Terminal storage capacity configurations. For these alternative configurations we evaluated the effect of tanker ship delays of different magnitudes on the number of critical events and product interfaces generated, on the duration of pipeline stoppages, on the satisfaction of product demand, and on the operating costs. Based on the results and the bottlenecks identified, we propose modifications to the original setup.
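The event-based time representation described above is naturally implemented as a priority queue of timestamped events: the clock jumps from one state-changing event to the next instead of advancing in fixed steps. A minimal sketch — the event names echo the abstract, but the state handling is a placeholder:

```python
# Minimal discrete-event simulation skeleton. Event names follow the
# abstract (tanker arrivals, valve operations, train departures); a real
# model would dispatch each event to a handler updating tank levels,
# pipeline state, and accumulated costs.

import heapq

class Simulation:
    def __init__(self):
        self.clock = 0.0
        self.queue = []   # entries: (time, seq, event_name, payload)
        self.seq = 0      # tie-breaker for simultaneous events

    def schedule(self, delay, event, payload=None):
        heapq.heappush(self.queue,
                       (self.clock + delay, self.seq, event, payload))
        self.seq += 1

    def run(self, horizon):
        log = []
        while self.queue and self.queue[0][0] <= horizon:
            self.clock, _, event, payload = heapq.heappop(self.queue)
            log.append((self.clock, event))  # placeholder for real handlers
        return log

sim = Simulation()
sim.schedule(5.0, "tanker_arrival")
sim.schedule(2.0, "valve_open")
sim.schedule(9.0, "train_departure")
log = sim.run(horizon=8.0)
```

Events are processed strictly in time order, so the intervals between successive clock values have unequal lengths, exactly as the abstract describes.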
Abstract:
The Mediterranean silvo-pastoral system known as Montado, in Portugal, is a complex land use system composed of an open tree stratum in various densities and an herbaceous layer used for livestock grazing. Livestock also profit from the acorns, and the grazing helps to avoid shrub encroachment. In the last 20 years, subsidies from the European Union have greatly promoted cattle rearing in this system and the introduction of heavy breeds, at the expense of sheep, goats, and the native cattle breeds. The balance of the traditional system is thus threatened, and a precise assessment of the balance between the different components of the system is therefore highly needed. The goal of this study was to gain a better understanding of a Montado farm system with cattle rearing as the major economic activity by applying the emergy evaluation method to calculate indices of yield, investment, environmental loading, and sustainability. By integrating different ecosystem components, the emergy evaluation method allows a comprehensive evaluation of this complex and multifunctional system at the scale of an individual farm. This method provides a set of indices that can help us understand the system and design management strategies that maximize emergy flow in the farm. In this paper, we apply the emergy evaluation method to a Montado farm with cattle rearing, as a way to gain a better understanding of this system at the farm scale. The value for the transformity of veal (2.66E+06 sej J-1) is slightly higher when compared to other systems producing protein. That means that the investment of nature and man in this product was higher, and it requires a premium price on the market. The renewability for Holm Oaks Farm (49 %), lower than for other similar systems, supports the assumption that this is a farm in which, compared with others, the amount of purchased inputs in relation to renewable inputs provided by nature is higher. The Emergy Investment Ratio is 0.91 for cattle rearing, compared to values of 0.49 for cork and 0.43 for firewood harvesting, making it clear that cattle rearing is a more labor-demanding activity than extractive activities such as cork and firewood harvesting.
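The indices named in this abstract are, in the standard emergy literature, ratios of renewable local (R), non-renewable local (N), and purchased (F) emergy inflows. As a sketch of those textbook definitions (the inflow values below are invented, merely chosen so that renewability comes out near the 49 % the abstract reports — they are not the farm's data):

```python
# Standard emergy indices from aggregate inflows (all in sej/yr).
# R = local renewable, N = local non-renewable, F = purchased inputs.
# The numbers passed in below are invented for illustration only.

def emergy_indices(R, N, F):
    U = R + N + F                        # total emergy use
    return {
        "renewability_pct": 100.0 * R / U,
        "EYR": U / F,                    # Emergy Yield Ratio
        "ELR": (N + F) / R,              # Environmental Loading Ratio
        "EIR": F / (R + N),              # Emergy Investment Ratio
        "ESI": (U / F) / ((N + F) / R),  # Sustainability Index = EYR / ELR
    }

idx = emergy_indices(R=4.9e15, N=0.6e15, F=4.5e15)
```

A renewability near 49 % with a sizeable F term illustrates the abstract's point: purchased inputs weigh heavily relative to the renewable inputs provided by nature.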
Abstract:
Internet of Things systems are pervasive systems that have evolved from cyber-physical to large-scale systems. Due to the number of technologies involved, software development faces several integration challenges. Among them, the ones preventing proper integration are those related to system heterogeneity, and thus to interoperability issues. From a software engineering perspective, developers mostly experience the lack of interoperability in two phases of software development: programming and deployment. On the one hand, modern software tends to be distributed over several components, each adopting its most appropriate technology stack, pushing programmers to code in a protocol- and data-agnostic way. On the other hand, each software component should run in the most appropriate execution environment and, as a result, system architects strive to automate deployment in distributed infrastructures. This dissertation aims to improve the development process by introducing proper tools to handle certain aspects of system heterogeneity. Our effort focuses on three of these aspects and, for each one, we propose a tool addressing the underlying challenge. The first tool handles heterogeneity at the transport and application protocol level, the second manages different data formats, while the third aims at optimal deployment. To realize the tools, we adopted a linguistic approach, i.e. we provided specific linguistic abstractions that help developers increase the expressive power of the programming language they use, writing better solutions in more straightforward ways. To validate the approach, we implemented use cases to show that the tools can be used in practice and that they help to achieve the expected level of interoperability.
In conclusion, to move a step towards the realization of an integrated Internet of Things ecosystem, we target programmers and architects and propose that they use the presented tools to ease the software development process.
Abstract:
The present thesis is focused on wave energy, which is a particular kind of ocean energy, and is based on the activity carried out during the EU project SEA TITAN. The main scope of this work is the design of a power electronic section for an innovative wave energy extraction system based on a switched-reluctance machine. In the first chapter, the general features of marine wave energy harvesting are treated. The concept of Wave Energy Converter (WEC) is introduced as well as the mathematical description of the waves, their characterization and measurement, the WEC classification, the operating principles and the standardization framework. Also, detailed considerations on the environmental impact are presented. The SEA TITAN project is briefly described. The second chapter is dedicated to the technical issues of the SEA TITAN project, such as the operating principle, the performance optimization carried out in the project, the main innovations as well as interesting demonstrations on the behavior of the generator and its control. In the third chapter, the power electronics converters of SEA TITAN are described, and the design choices, procedures and calculations are shown, with a further insight into the application given by analyzing the MATLAB Simulink model of the system and its control scheme. Experimental tests are reported in the fourth chapter, with graphs and illustrations of the power electronic apparatus interfaced with the real machine. Finally, the conclusion in the fifth chapter offers a global overview of the project and opens further development pathways.
Abstract:
The work described in this Master's Degree thesis grew out of a collaboration with Maserati S.p.A., an Italian luxury car maker headquartered in Modena, in the heart of the Italian Motor Valley, where I worked as an intern in the Virtual Engineering team between September 2021 and February 2022. This work proposes the validation, using real-world ECUs, of a Driver Drowsiness Detection (DDD) system prototype based on different detection methods, with the goal of overcoming input signal losses and system failures. Detection methods of different categories were chosen from the literature and merged so as to combine the benefits of each, overcome their limitations, and limit as much as possible their degree of intrusiveness, to prevent any kind of driving distraction: an image-processing-based technique for detecting human physical signals as well as methods based on driver-vehicle interaction are used. A Driver-In-the-Loop simulator is used to gather real data on which a Machine Learning-based algorithm is trained and validated. These data come from the tests that the company conducts in its daily activities, so confidential information about the simulator and the drivers is omitted. Although the impact of the proposed system is not yet remarkable and work remains to be done on all its elements, the results indicate the main advantages of the system in terms of robustness against subsystem failures and signal losses.
Abstract:
This thesis illustrates the construction of a mathematical model of a hydraulic system, oriented to the design of a model predictive control (MPC) algorithm. The modeling procedure starts with the basic formulation of a piston-servovalve system. The latter is a complex nonlinear system with some unknown and unmeasurable effects that constitute a challenging problem for the modeling procedure. A first approximation of the system parameters is obtained from datasheet information, workbench tests provided by the company, and other company data. Then, to validate and refine the model, open-loop simulations were run to match the model against the characteristics obtained from real acquisitions. The final set of ODEs captures all the main peculiarities of the system, despite some characteristics due to highly varying and unknown hydraulic effects, such as the unmodeled resistive elements of the pipes. After a careful analysis, since the model presents many internal complexities, a simplified version is presented. The latter is used to linearize and discretize the nonlinear model correctly. Based on that, an MPC algorithm for reference tracking with linear constraints is implemented. The results show the potential of MPC in this kind of industrial application: high-quality tracking performance while satisfying state and input constraints. The increased robustness and flexibility are evident with respect to the standard control techniques, such as PID controllers, adopted for these systems. The simulations for model validation and for the controlled system were carried out in a Python environment.
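The linearize-discretize-control pipeline described above can be sketched on a toy system. This is only a stand-in: a real linear MPC solves a constrained quadratic program over the full input sequence, whereas here the scalar dynamics, horizon, cost weights, and the grid of constant candidate inputs are all invented for illustration:

```python
# Toy receding-horizon (MPC-style) controller on a discretized scalar system.
# Forward-Euler discretization of dx/dt = a*x + b*u with sample time Ts:
a, b, Ts = -1.0, 2.0, 0.05
A, B = 1.0 + a * Ts, b * Ts          # x[k+1] = A*x[k] + B*u[k]

def horizon_cost(x, inputs, ref, q=1.0, r=0.01):
    """Quadratic tracking cost accumulated over a predicted trajectory."""
    cost = 0.0
    for u in inputs:
        x = A * x + B * u
        cost += q * (x - ref) ** 2 + r * u ** 2
    return cost

def mpc_step(x, ref, horizon=5, u_min=-1.0, u_max=1.0, grid=41):
    """Pick the best constant input over the horizon from a feasible grid
    (a crude stand-in for the QP a real linear MPC solves each step)."""
    candidates = [u_min + (u_max - u_min) * i / (grid - 1)
                  for i in range(grid)]
    return min(candidates, key=lambda u: horizon_cost(x, [u] * horizon, ref))

x, ref, traj = 0.0, 0.5, []
for _ in range(200):                  # receding horizon: apply first move only
    u = mpc_step(x, ref)
    x = A * x + B * u                 # plant update (model assumed perfect)
    traj.append(x)
```

The input bound is enforced by construction (only feasible candidates are evaluated), which is the constraint-handling ability that distinguishes MPC from a plain PID loop.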
Abstract:
Since 2010, the proton radius has become one of the most interesting values to determine. The first evidence that its internal structure is not completely understood was the measurement of the Lamb shift in muonic hydrogen, which yielded a radius value 7σ lower than expected. A new road was thus opened, and the era of the Proton Radius Puzzle began. The FAMU experiment is a project that tries to answer this puzzle with a high-precision experimental apparatus. The work of this thesis is based on the study, construction, and first characterization of a new detection system. Building on previous experiments and simulations, this apparatus is composed of 17 detectors positioned on a semicircular crown, together with the related electronic circuit. The detectors' characterization relies on a LabView program controlling a digital potentiometer and on two further analog potentiometers, all three used to set the amplitude of each detector to a predefined value, around 1.2 V, observed on an oscilloscope. This is required in order to obtain, in the final measurement, a single high peak given by the sum of all the signals coming from the detectors. Each signal was acquired for almost half an hour, but the entire circuit was kept active for longer to verify its capacity to work over extended periods. The principal results of this thesis are the spectra of 12 detectors and the corresponding values of voltage, FWHM, and resolution. The acquisitions also show another expected behavior: the strong dependence of the detectors on temperature, demonstrating that a temperature change causes fluctuations in the signal. In turn, these fluctuations affect the spectrum, resulting in a shift of the curve and a lower resolution. On the other hand, a measurement performed in stable conditions leads to agreement between the nominal and experimental measurements, as for detectors 10, 11, and 12 of our system.
Abstract:
Lipidic mixtures present a particular phase change profile highly affected by their unique crystalline structure. However, classical solid-liquid equilibrium (SLE) thermodynamic modeling approaches, which assume the solid phase to be a pure component, sometimes fail to describe the phase behavior correctly, and this inability increases with the complexity of the system. To overcome some of these problems, this study describes a new procedure to depict the SLE of fatty binary mixtures presenting solid solutions, namely the Crystal-T algorithm. Considering the non-ideality of both liquid and solid phases, this algorithm is aimed at determining the temperatures at which the first and last crystal of the mixture melts. The evaluation is focused on experimental data measured and reported in this work for systems composed of triacylglycerols and fatty alcohols. The liquidus and solidus lines of the SLE phase diagrams were described using excess Gibbs energy based equations, with the group contribution UNIFAC model for the calculation of the activity coefficients of both liquid and solid phases. Very low deviations between theoretical and experimental data evidenced the strength of the algorithm, contributing to the enlargement of the scope of SLE modeling.
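For reference, algorithms of this kind build on the classical iso-fugacity condition for solid-liquid equilibrium with a non-ideal solid solution (textbook form, neglecting heat-capacity terms):

```latex
\ln \frac{x_i^{L}\,\gamma_i^{L}}{x_i^{S}\,\gamma_i^{S}}
  = \frac{\Delta H_{m,i}}{R}\left(\frac{1}{T_{m,i}} - \frac{1}{T}\right)
```

where \(x_i\) and \(\gamma_i\) are the mole fraction and activity coefficient of component \(i\) in the liquid (L) and solid (S) phases, and \(\Delta H_{m,i}\), \(T_{m,i}\) are its melting enthalpy and temperature. Setting \(x_i^{S}\gamma_i^{S} = 1\) recovers the pure-solid simplification that, as the abstract notes, often fails for lipidic mixtures forming solid solutions.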
Abstract:
A Plackett-Burman experimental design was applied for the robustness assessment of GC×GC-qMS (comprehensive two-dimensional gas chromatography with fast quadrupolar mass spectrometric detection) in the quantitative and qualitative analysis of volatile compounds from chocolate samples isolated by headspace solid-phase microextraction (HS-SPME). The design evaluated the influence of small changes around the nominal level of six factors deemed important for peak areas (carrier gas flow rate, modulation period, ion source temperature, MS photomultiplier power, injector temperature, and interface temperature) and of four factors considered potentially influential on spectral quality (minimum and maximum limits of the scanned mass range, ion source temperature, and photomultiplier power). The analytes selected for the study were 2,3,5-trimethylpyrazine, 2-octanone, octanal, 2-pentyl-furan, 2,3,5,6-tetramethylpyrazine, 2-nonanone, and nonanal. The factors found to be important for the robustness of the system were the photomultiplier power for quantitative analysis and the lower limit of the mass scanning range for qualitative analysis.
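In a two-level Plackett-Burman design, each factor's effect is estimated as the difference between the mean response at its high (+1) and low (-1) levels; effects whose magnitude exceeds a significance limit flag the method as non-robust to that factor. A generic sketch (the 8-run matrix uses the standard cyclic generator, but the responses are invented, not the chromatographic data of this study):

```python
# Effect estimation for a two-level Plackett-Burman design:
# effect_j = mean(y where factor j is +1) - mean(y where factor j is -1).
# The responses y are invented for illustration.

def pb_effects(design, y):
    n_factors = len(design[0])
    effects = []
    for j in range(n_factors):
        hi = [y[i] for i, row in enumerate(design) if row[j] == +1]
        lo = [y[i] for i, row in enumerate(design) if row[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# 7-factor, 8-run Plackett-Burman design: cyclic shifts of the standard
# generator (+ + + - + - -) plus a final all-minus row.
design = [
    [+1, +1, +1, -1, +1, -1, -1],
    [-1, +1, +1, +1, -1, +1, -1],
    [-1, -1, +1, +1, +1, -1, +1],
    [+1, -1, -1, +1, +1, +1, -1],
    [-1, +1, -1, -1, +1, +1, +1],
    [+1, -1, +1, -1, -1, +1, +1],
    [+1, +1, -1, +1, -1, -1, +1],
    [-1, -1, -1, -1, -1, -1, -1],
]
y = [102.0, 98.5, 101.2, 99.1, 100.4, 99.8, 100.9, 98.1]
effects = pb_effects(design, y)
```

Because the columns are mutually orthogonal, each effect isolates one factor even though all seven vary simultaneously across only eight runs.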
Abstract:
This in vitro study evaluated the tensile bond strength of glass fiber posts (Reforpost - Angelus-Brazil) cemented to root dentin with a resin cement (RelyX ARC - 3M/ESPE) associated with two different adhesive systems (Adper Single Bond - 3M/ESPE and Adper Scotchbond Multi Purpose (MP) Plus - 3M/ESPE), using the pull-out test. Twenty single-rooted human teeth with standardized root canals were randomly assigned to 2 groups (n=10): G1- etching with 37% phosphoric acid gel (3M/ESPE) + Adper Single Bond + #1 post (Reforpost - Angelus) + four #1 accessory posts (Reforpin - Angelus) + resin cement; G2- etching with 37% phosphoric acid gel + Adper Scotchbond MP Plus + #1 post + four #1 accessory posts + resin cement. The specimens were stored in distilled water at 37°C for 7 days and submitted to the pull-out test in a universal testing machine (EMIC) at a crosshead speed of 0.5 mm/min. The mean values of bond strength (kgf) and standard deviation were: G1- 29.163 ± 7.123; G2- 37.752 ± 13.054. Statistical analysis (Student's t-test; α=0.05) showed no statistically significant difference (p>0.05) between the groups. Adhesive bonding failures between the resin cement and the root canal dentin surface were observed in both groups, with non-polymerized resin cement in the apical portion of the post space when Single Bond was used (G1). The type of adhesive system employed in fiber post cementation did not influence the pull-out bond strength.
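The reported summary statistics can be checked against the stated conclusion with a pooled two-sample t statistic (assuming equal group sizes of n = 10, as stated in the abstract):

```python
# Two-sample pooled t statistic from the reported summary data
# (G1: 29.163 ± 7.123, G2: 37.752 ± 13.054, n = 10 per group).

import math

def pooled_t(m1, s1, m2, s2, n):
    # With equal group sizes, the pooled-variance formula reduces to this.
    return (m2 - m1) / math.sqrt((s1 ** 2 + s2 ** 2) / n)

t = pooled_t(29.163, 7.123, 37.752, 13.054, 10)
# t comes out near 1.83 with 18 degrees of freedom, below the two-tailed
# critical value of about 2.10 at alpha = 0.05 -- consistent with the
# reported absence of a statistically significant difference.
```

This also confirms that the original "(p<0.05)" was a typo: the computed statistic corresponds to p > 0.05.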