965 results for distributed development
Abstract:
Financial Support: CTC/INCTC, FAPESP, FUNDHERP, FINEP.
Abstract:
Background: Tobacco and cannabis use are strongly interrelated, but current national and international cessation programs typically focus on one substance, and address the other substance either only marginally or not at all. This study aimed to identify the demand for, and describe the development and content of, the first integrative group cessation program for co-smokers of cigarettes and cannabis. Methods: First, a preliminary study using expert interviews, user focus groups with (ex-)smokers, and an online survey was conducted to investigate the demand for, and potential content of, an integrative smoking cessation program (ISCP) for tobacco and cannabis co-smokers. This study revealed that both experts and co-smokers considered an ISCP to be useful but expected only modest levels of readiness for participation. Based on the findings of the preliminary study, an interdisciplinary expert team developed a course concept and a recruitment strategy. The developed group cessation program is based on current treatment techniques (such as motivational interviewing, cognitive behavioural therapy, and self-control training) and structured into six course sessions. The program was evaluated regarding its acceptability among participants and course instructors. Results: Both the participants and course instructors evaluated the course positively. Participants and instructors especially appreciated the group discussions and the modules that were aimed at developing personal strategies that could be applied during simultaneous cessation of tobacco and cannabis, such as dealing with craving, withdrawal, and high-risk situations. Conclusions: There is a clear demand for a double cessation program for co-users of cigarettes and cannabis, and the first group cessation program tailored for these users has been developed and evaluated for acceptability. In the near future, the feasibility of the program will be evaluated. Trial registration: Current Controlled Trials ISRCTN15248397
Abstract:
Background: Neonatal STZ treatment induces a state of mild hyperglycemia in adult rats that disrupts metabolism and maternal/fetal interactions. The aim of this study was to investigate the effect of neonatal STZ treatment on the physical development, behavior, and reproductive function of female Wistar rats from infancy to adulthood. Methods: At birth, litters were assigned either to a Control group (subcutaneous (s.c.) citrate buffer, n = 10) or an STZ group (streptozotocin (STZ), 100 mg/kg s.c., n = 6). Blood glucose levels were measured on postnatal days (PND) 35, 84 and 120. In Experiment 1, body weight, length and the appearance of developmental milestones such as eye and vaginal opening were monitored. To assess the relative contribution of the initial and long-term effects of STZ treatment, this group was subdivided based on blood glucose levels recorded on PND 120: STZ hyperglycemic (between 120 and 300 mg/dl) and STZ normoglycemic (under 120 mg/dl). Behavioral activity was assessed in an open field on PND 21 and 75. In Experiment 2, estrous cyclicity, sexual behavior and circulating gonadotropin, ovarian steroid, and insulin levels were compared between control and STZ-hyperglycemic rats. In all measures the litter was the experimental unit. Parametric data were analyzed using one-way or, where appropriate, two-way ANOVA, and significant effects were investigated using Tukey’s post hoc test. Fisher’s exact test was employed when data did not satisfy the assumption of normality, e.g. the presence of urine and fecal boli in the open field between groups. Statistical significance was set at p < 0.05 for all data. Results: As expected, neonatal STZ treatment caused hyperglycemia and hypoinsulinemia in adulthood. STZ-treated pups also showed a temporary reduction in growth rate that probably reflected the early loss of circulating insulin. Hyperglycemic rats also exhibited a reduction in locomotor and exploratory behavior in the open field. Mild hyperglycemia did not impair gonadotropin levels or estrous cyclicity, but ovarian steroid concentrations were altered. Conclusions: In female Wistar rats, neonatal STZ treatment impairs growth in infancy and results in mild hyperglycemia/hypoinsulinemia in adulthood that is associated with changes in the response to a novel environment and altered ovarian steroid hormone levels.
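A minimal sketch of the statistical workflow named in the Methods (one-way ANOVA, Tukey's post hoc test, Fisher's exact test), in Python. All values, group sizes, and counts below are hypothetical illustrations, not the study's data.

```python
# Sketch of the analysis workflow described above; all data are hypothetical.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical PND 120 blood glucose (mg/dl), one value per litter (the experimental unit)
control = np.array([95, 102, 110, 98, 105, 99, 101, 97, 103, 100])
stz_normo = np.array([108, 115, 112, 118])
stz_hyper = np.array([210, 185, 240, 160, 275, 300])

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(control, stz_normo, stz_hyper)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's post hoc test to locate pairwise differences (run only if the ANOVA is significant)
values = np.concatenate([control, stz_normo, stz_hyper])
groups = (["control"] * len(control) + ["STZ-normo"] * len(stz_normo)
          + ["STZ-hyper"] * len(stz_hyper))
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Fisher's exact test for a count outcome (e.g. litters with/without fecal boli in the open field)
table = [[7, 3],   # control: present / absent (hypothetical counts)
         [2, 4]]   # STZ-hyper: present / absent
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_fisher:.4f}")
```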
Abstract:
Rodent gastric mucosa grows and differentiates during the suckling-weaning transition. Among the molecules in rat milk, EGF and TGFβ are important peptides in the control of cell proliferation, and together with TGFα, they are also produced by the submandibular glands. We aimed to determine the effect of saliva and milk on epithelial cell proliferation in the stomach of rat pups. We also examined the distribution of TGFα in the gastric mucosa after sialoadenectomy (SIALO) and fasting in order to determine whether this growth factor is affected by the deprivation of molecules derived from saliva and milk. SIALO was performed at 14 days and fasting was induced 3 days later. Cell proliferation was evaluated through the metaphasic index (MI) and TGFα was detected by immunohistochemistry. We observed that whereas SIALO did not alter cell division, since the MI was unchanged, fasting stimulated cell proliferation (P < 0.05). After SIALO and fasting, MI was reduced when compared to the fasted group (P < 0.05). We found that TGFα is distributed along the gastric gland and that SIALO did not interfere with the localization or number of immunolabeled cells, but fasting increased their density when compared to the control (P < 0.05). The association of SIALO and fasting reduced TGFα immunostaining (P < 0.05). Therefore, during fasting, the high MI was paralleled by increased TGFα in the gastric epithelium, but interestingly, this effect was found only in the presence of the submandibular glands. We suggest that during suckling, peptides derived from saliva and milk are important to regulate gastric growth.
Abstract:
Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, which has allowed the birth of a new type of network named wireless sensor networks (WSNs). The main features of such networks are: the nodes can be positioned randomly over a given field with a high density; each node operates both as a sensor (for collecting environmental data) and as a transceiver (for transmitting information toward the data retrieval point); and the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as the area near a volcano; in a hospital they could be used to monitor the physical condition of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. The thesis investigates the use of WSNs in two possible scenarios and for each of them proposes a solution to the related problems that takes this trade-off into account. The first scenario considers a network with a high number of nodes deployed in a given geographical area without detailed planning that have to transmit data toward a coordinator node, named the sink, which we assume to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently toward a far receiver. It is assumed that each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle). We assume that the communication channels between the local nodes and the receiver are subject to fading and noise. The receiver onboard the UAV must be able to fuse the weak and noisy signals coherently to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of the simultaneous transmission of the common message by the nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the need for simple nodes (to this end, the computational complexity is moved to the receiver onboard the UAV), and the importance of guaranteeing high levels of energy efficiency in the network, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme. Since we deal with a WSN, both of these performance metrics are evaluated taking into consideration the energy efficiency of the network. The second scenario considers the use of a chain network for the detection of fires by using nodes that have the double function of sensors and routers. The first function is the monitoring of a temperature parameter that allows a local binary decision of target (fire) absent/present to be taken.
The second function is that each node receives the decision made by the previous node in the chain, compares it with the decision derived from its own observation of the phenomenon, and transmits the resulting decision to the next node. The chain ends at the sink node, which transmits the received decision to the user. In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical scenario of distributed detection. To obtain good performance it is necessary to define, for each node, fusion rules that summarize the local observations and the decisions of the previous nodes into a final decision that is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was realized and tested in a six-month on-field experiment.
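A minimal sketch of the serial (chain) decision fusion just described: each node combines the one-bit decision received from its predecessor with its own local threshold decision and forwards a single bit toward the sink. The threshold, noise model, and the simple OR fusion rule are illustrative assumptions, not the thesis's actual design.

```python
# Sketch of serial (chain) distributed detection; detector and fusion rule are assumptions.
import random

TEMP_THRESHOLD = 60.0   # local decision threshold (deg C), hypothetical

def local_decision(true_temp: float, noise_std: float = 5.0) -> int:
    """One-bit local decision: 1 = fire present, 0 = absent."""
    measured = true_temp + random.gauss(0.0, noise_std)
    return int(measured > TEMP_THRESHOLD)

def fuse(prev_decision: int, own_decision: int) -> int:
    """Fusion rule at each node: here a simple OR of the incoming and local bits.
    A real design would choose the rule to trade off error probability and link traffic."""
    return prev_decision | own_decision

def chain_detection(true_temps, seed=None):
    """Propagate a one-bit decision along the chain toward the sink."""
    if seed is not None:
        random.seed(seed)
    decision = 0  # the first node has no predecessor
    for temp in true_temps:
        decision = fuse(decision, local_decision(temp))
    return decision  # bit delivered by the last node to the sink

# Usage: a 10-node chain, with a fire near nodes 4-6
temps = [25, 25, 30, 70, 85, 75, 30, 25, 25, 25]
print("Sink decision (1 = fire):", chain_detection(temps, seed=1))
```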
Abstract:
Mainstream hardware is becoming parallel, heterogeneous, and distributed on every desk, in every home, and in every pocket. As a consequence, in recent years software has taken an epochal turn toward concurrency, distribution, and interaction, pushed by the evolution of hardware architectures and the growth of network availability. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers have to face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, the dissertation first constructs the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focuses on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. Then, the perspective shifts from the development of intelligent software systems toward general-purpose software development. Using the expertise gained during the background phase, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development and, at the same time, provides an agent-oriented level of abstraction for the engineering of general-purpose software systems.
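For readers unfamiliar with the actor paradigm the thesis takes as its starting point, a minimal Python illustration follows: each actor owns a mailbox and processes messages sequentially, and state is changed only by messages. This is a generic sketch, not simpAL or any specific actor framework.

```python
# Generic illustration of the actor abstraction: one mailbox per actor,
# asynchronous sends, sequential message processing.
import threading
import queue

class Actor:
    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        """Asynchronous send: enqueue and return immediately."""
        self._mailbox.put(message)

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message is None:          # poison pill stops the actor
                break
            self.receive(message)

    def receive(self, message):
        raise NotImplementedError

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()

class Counter(Actor):
    """Example actor: encapsulated state modified only through messages."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def receive(self, message):
        if message == "inc":
            self.count += 1

# Usage
c = Counter()
for _ in range(3):
    c.send("inc")
c.stop()
print(c.count)  # 3
```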
Abstract:
A Smart City is a high-performance urban context, where citizens live independently and are more aware of the surrounding opportunities, thanks to the forward-looking development of economy, politics, governance, mobility and environment. ICT infrastructures play a key role in this new research field, also serving as a means for society to allow new ideas to prosper and new, more efficient approaches to be developed. The aim of this work is to research and develop novel solutions, here called smart services, in order to solve several upcoming problems and known issues in urban areas and, more generally, in the context of modern society. A specific focus is placed on smart governance and on the privacy issues that have arisen in the cellular age.
Abstract:
Besides the traditional paradigm of "centralized" power generation, a new concept of "distributed" generation is emerging, in which the user also becomes a prosumer. During this transition, Energy Storage Systems (ESS) can provide multiple services and features which are necessary for a higher quality of the electrical system and for the optimization of non-programmable Renewable Energy Source (RES) power plants. An ESS prototype was designed, developed and integrated into a renewable energy production system in order to create a smart microgrid and, consequently, manage the energy flow efficiently and intelligently as a function of the power demand. The produced energy can be fed into the grid, supplied directly to the load, or stored in batteries. The microgrid is composed of a 7 kW wind turbine (WT) and a 17 kW photovoltaic (PV) plant. The load is given by the electrical utilities of a cheese factory. The ESS is composed of two subsystems: a Battery Energy Storage System (BESS) and a Power Control System (PCS). With the aim of sizing the ESS, a Remote Grid Analyzer (RGA) was designed, realized and connected to the wind turbine, the photovoltaic plant and the switchboard. Afterwards, different electrochemical storage technologies were studied and, taking into account the load requirements of the cheese factory, the most suitable solution was identified in the high-temperature Na-NiCl2 salt battery technology. The data acquired from all electrical utilities provided a detailed load analysis, indicating an optimal storage size of a 30 kW battery system. Moreover, a container was designed and realized to house the BESS and PCS, meeting all requirements and safety conditions. Furthermore, a smart control system was implemented in order to handle the different applications of the ESS, such as peak shaving and load levelling.
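A minimal sketch of a peak-shaving / load-levelling decision rule of the kind such an ESS controller implements: discharge the battery when residual demand exceeds a grid-import threshold, charge it when renewables exceed the load. The threshold, battery rating, capacity, and time step are hypothetical; only the 30 kW figure echoes the sizing mentioned above.

```python
# Sketch of a peak-shaving dispatch rule; all parameters are illustrative assumptions.
PEAK_THRESHOLD_KW = 20.0   # grid import level above which the battery discharges
BATTERY_POWER_KW = 30.0    # inverter rating (matches the 30 kW sizing above)
CAPACITY_KWH = 60.0        # assumed usable battery capacity

def dispatch(load_kw, pv_kw, wind_kw, soc_kwh, dt_h=0.25):
    """Decide battery power for one time step.
    Returns (battery_kw, new_soc_kwh); battery_kw > 0 = discharge, < 0 = charge."""
    net_load = load_kw - pv_kw - wind_kw        # residual demand after renewables
    if net_load > PEAK_THRESHOLD_KW:
        # Peak shaving: discharge to clip the peak, limited by rating and stored energy
        battery = min(net_load - PEAK_THRESHOLD_KW, BATTERY_POWER_KW, soc_kwh / dt_h)
    elif net_load < 0:
        # Surplus renewable production: charge, limited by rating and free capacity
        battery = -min(-net_load, BATTERY_POWER_KW, (CAPACITY_KWH - soc_kwh) / dt_h)
    else:
        battery = 0.0
    return battery, soc_kwh - battery * dt_h

# Usage: one 15-minute step with a 35 kW load and 12 kW of renewable production
p_batt, soc = dispatch(load_kw=35.0, pv_kw=8.0, wind_kw=4.0, soc_kwh=40.0)
print(f"battery = {p_batt:.1f} kW, SOC = {soc:.1f} kWh")
```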
Abstract:
This thesis is focused on Smart Grid applications in medium voltage distribution networks. For the development of new applications, the availability of simulation tools able to model the dynamic behavior of both the power system and the communication network is useful. Such a co-simulation environment would allow the assessment of the feasibility of using a given network technology to support communication-based Smart Grid control schemes on an existing segment of the electrical grid, and the determination of the range of control schemes that different communication technologies can support. For this reason, a co-simulation platform is presented that has been built by linking the Electromagnetic Transients Program Simulator (EMTP v3.0) with a Telecommunication Network Simulator (OPNET-Riverbed v18.0). The simulator is used to design and analyze the coordinated use of Distributed Energy Resources (DERs) for voltage/var control (VVC) in the distribution network. The thesis focuses on a control structure based on the use of phasor measurement units (PMUs). In order to limit the required reinforcements of the communication infrastructures currently adopted by Distribution Network Operators (DNOs), the study focuses on leader-less MAS schemes that do not assign special coordinating roles to specific agents. Leader-less MAS are expected to produce more uniform communication traffic than centralized approaches that include a moderator agent. Moreover, leader-less MAS are expected to be less affected by the limitations and constraints of some communication links. The developed co-simulator has allowed the definition of specific countermeasures against the limitations of the communication network, with particular reference to latency and loss of information, for both wired and wireless communication networks. Moreover, the co-simulation platform has also been coupled with a mobility simulator in order to study specific countermeasures against the negative effects on the medium voltage distribution network caused by the concurrent connection of electric vehicles.
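A minimal sketch of the leader-less multi-agent idea described above: every DER agent exchanges its state only with neighbouring agents and applies the same averaging (consensus) update, so no agent plays a special coordinating role. The topology, gain, and the controlled quantity (a per-agent reactive-power set-point) are illustrative assumptions, not the thesis's actual scheme.

```python
# Sketch of a leader-less consensus update among DER agents; parameters are assumptions.
def consensus_step(states, neighbours, epsilon=0.2):
    """One synchronous consensus iteration over per-agent reactive-power set-points."""
    new_states = states.copy()
    for i, qi in enumerate(states):
        new_states[i] = qi + epsilon * sum(states[j] - qi for j in neighbours[i])
    return new_states

# Usage: 4 DER agents on a line topology, converging toward a common set-point
q = [0.0, 0.4, 0.1, 0.9]                     # initial reactive-power set-points (p.u.)
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
for _ in range(50):
    q = consensus_step(q, neighbours)
print([round(v, 3) for v in q])              # all values approach the average, 0.35
```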
Abstract:
This thesis explores system performance for reconfigurable distributed systems and provides an analytical model for determining the throughput of theoretical systems based on the OpenSPARC FPGA Board and the SIRC Communication Framework. The model was developed by studying a small set of variables that together determine a system's throughput. The importance of this model is in assisting system designers in deciding whether or not to commit to designing a reconfigurable distributed system, based on the estimated performance and hardware costs. Because custom hardware design and distributed system design are both time consuming and costly, it is important for designers to make decisions regarding system feasibility early in the development cycle. Based on experimental data, the model presented in this paper shows a close fit, with less than 10% experimental error on average. The model is limited to a certain range of problems, but it can still be used within those limitations and also provides a foundation for further development of models of reconfigurable distributed systems.
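The abstract does not give the model's actual form; purely as an illustration of the kind of back-of-the-envelope throughput estimate such models produce for host-plus-FPGA systems, here is a generic sketch with invented parameter names, assuming communication and computation are not overlapped.

```python
# Generic throughput estimate; NOT the thesis's model, all names and numbers are invented.
def estimated_throughput(work_items, bytes_per_item, link_bandwidth_bps,
                         item_compute_time_s, num_boards, link_overhead_s=0.0):
    """Rough throughput (items/s) for a host that ships items to FPGA boards,
    assuming transfer and computation happen sequentially."""
    transfer_time = work_items * bytes_per_item * 8 / link_bandwidth_bps + link_overhead_s
    compute_time = work_items * item_compute_time_s / num_boards   # ideal parallel speed-up
    return work_items / (transfer_time + compute_time)

# Usage with made-up numbers: 10k items of 1 KiB over a 100 Mbit/s link to 4 boards
print(f"{estimated_throughput(10_000, 1024, 100e6, 1e-4, 4):.0f} items/s")
```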
Abstract:
Differential cyp19 aromatase expression during development leads to sexual dimorphisms in the mammalian brain. Whether this is also true for fish is unknown. The aim of the current study has been to follow the expression of the brain-specific aromatase cyp19a2 in the brains of sexually differentiating zebrafish. To assess the role of cyp19a2 in the zebrafish brain during gonadal differentiation, we used quantitative reverse transcriptase-polymerase chain reaction and immunohistochemistry to detect differences in the transcript or protein levels and/or expression pattern in juvenile fish, histology to monitor the gonadal status, and double immunofluorescence with neuronal or radial glial markers to characterize aromatase-positive cells. Our data show that cyp19a2 expression levels during zebrafish sexual differentiation cannot be assigned to a particular sex; the expression pattern in the brain is similar in both sexes and aromatase-positive cells appear to be mostly of radial glial nature.
Abstract:
Regional flood frequency techniques are commonly used to estimate flood quantiles when flood data is unavailable or the record length at an individual gauging station is insufficient for reliable analyses. These methods compensate for limited or unavailable data by pooling data from nearby gauged sites. This requires the delineation of hydrologically homogeneous regions in which the flood regime is sufficiently similar to allow the spatial transfer of information. It is generally accepted that hydrologic similarity results from similar physiographic characteristics, and thus these characteristics can be used to delineate regions and classify ungauged sites. However, as currently practiced, the delineation is highly subjective and dependent on the similarity measures and classification techniques employed. A standardized procedure for delineation of hydrologically homogeneous regions is presented herein. Key aspects are a new statistical metric to identify physically discordant sites, and the identification of an appropriate set of physically based measures of extreme hydrological similarity. A combination of multivariate statistical techniques applied to multiple flood statistics and basin characteristics for gauging stations in the Southeastern U.S. revealed that basin slope, elevation, and soil drainage largely determine the extreme hydrological behavior of a watershed. Use of these characteristics as similarity measures in the standardized approach for region delineation yields regions which are more homogeneous and more efficient for quantile estimation at ungauged sites than those delineated using alternative physically-based procedures typically employed in practice. The proposed methods and key physical characteristics are also shown to be efficient for region delineation and quantile development in alternative areas composed of watersheds with statistically different physical composition. In addition, the use of aggregated values of key watershed characteristics was found to be sufficient for the regionalization of flood data; the added time and computational effort required to derive spatially distributed watershed variables does not increase the accuracy of quantile estimators for ungauged sites. This dissertation also presents a methodology by which flood quantile estimates in Haiti can be derived using relationships developed for data rich regions of the U.S. As currently practiced, regional flood frequency techniques can only be applied within the predefined area used for model development. However, results presented herein demonstrate that the regional flood distribution can successfully be extrapolated to areas of similar physical composition located beyond the extent of that used for model development provided differences in precipitation are accounted for and the site in question can be appropriately classified within a delineated region.
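A minimal sketch of the multivariate delineation step described above: cluster gauged basins on the physical characteristics the study found most informative (slope, elevation, soil drainage), then assign an ungauged basin to the nearest cluster. The data, scaling, number of regions, and clustering algorithm (k-means) are illustrative assumptions, not the dissertation's actual procedure.

```python
# Sketch of region delineation by clustering basin characteristics; data are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical basin characteristics: [slope (%), elevation (m), soil drainage index]
basins = np.array([
    [2.1,  40, 0.3], [1.8,  55, 0.4], [2.5,  60, 0.2],   # flat, low, poorly drained
    [8.0, 320, 0.7], [9.5, 400, 0.8], [7.2, 350, 0.9],   # steep, high, well drained
])

scaler = StandardScaler()
X = scaler.fit_transform(basins)              # put characteristics on a common scale
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("region labels for gauged basins:", kmeans.labels_)

# Classify an ungauged basin into one of the delineated regions
ungauged = scaler.transform([[8.8, 380, 0.75]])
print("ungauged basin assigned to region:", int(kmeans.predict(ungauged)[0]))
```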
Abstract:
To mitigate greenhouse gas (GHG) emissions and reduce U.S. dependence on imported oil, the United States (U.S.) is pursuing several options to create biofuels from renewable woody biomass (hereafter referred to as “biomass”). Because of the distributed nature of the biomass feedstock, the cost and complexity of biomass recovery operations pose significant challenges that hinder increased biomass utilization for energy production. To facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization and the tapping of unused forest residues, it is proposed to develop biofuel supply chain models based on optimization and simulation approaches. The biofuel supply chain is structured around four components: biofuel facility locations and sizes, biomass harvesting/forwarding, transportation, and storage. A Geographic Information System (GIS) based approach is proposed as a first step for selecting potential facility locations for biofuel production from forest biomass, based on a set of evaluation criteria such as accessibility to biomass, the railway/road transportation network, water bodies and workforce. The development of optimization and simulation models is also proposed. The results of the models will be used to determine (1) the number, location, and size of the biofuel facilities, and (2) the amounts of biomass to be transported between the harvesting areas and the biofuel facilities over a 20-year timeframe. The multi-criteria objective is to minimize the weighted sum of the delivered feedstock cost, energy consumption, and GHG emissions simultaneously. Finally, a series of sensitivity analyses will be conducted to identify the sensitivity of the decisions, such as the optimal site selected for the biofuel facility, to changes in influential parameters, such as biomass availability and transportation fuel price. Intellectual Merit: The proposed research will facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization in the renewable biofuel industry. The GIS-based facility location analysis considers a series of factors which have not been considered simultaneously in previous research. Location analysis is critical to the financial success of producing biofuel. The modeling of woody biomass supply chains using both optimization and simulation, combined with the GIS-based approach as a precursor, has not been done to date. The optimization and simulation models can help to ensure the economic and environmental viability and sustainability of the entire biofuel supply chain at both the strategic design level and the operational planning level. Broader Impacts: The proposed models for biorefineries can be applied to other types of manufacturing or processing operations using biomass. This is because the biomass feedstock supply chain is similar, if not the same, for biorefineries, biomass-fired or co-fired power plants, or torrefaction/pelletization operations. Additionally, the results of this research will continue to be disseminated internationally through publications in journals, such as Biomass and Bioenergy, and Renewable Energy, and presentations at conferences, such as the 2011 Industrial Engineering Research Conference. For example, part of the research work related to biofuel facility identification has been published: Zhang, Johnson and Sutherland [2011] (see Appendix A).
There will also be opportunities for the Michigan Tech campus community to learn about the research through the Sustainable Future Institute.
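A minimal sketch of the kind of facility-location and transport model the proposal describes, minimizing a weighted sum of delivered cost, energy, and GHG terms, in Python with PuLP. Sites, supplies, capacities, weights, and all coefficients are invented for illustration; the actual model is far richer (sizes, 20-year horizon, storage).

```python
# Toy facility-location/transport model; all data and weights are illustrative assumptions.
import pulp

harvest_areas = ["H1", "H2"]
sites = ["S1", "S2"]
supply = {"H1": 100.0, "H2": 80.0}            # available biomass (kt/yr)
capacity = {"S1": 120.0, "S2": 120.0}         # facility capacity if built (kt/yr)
demand_total = 150.0                           # biomass required by the biofuel program
cost = {("H1", "S1"): 20, ("H1", "S2"): 35, ("H2", "S1"): 30, ("H2", "S2"): 18}  # $/t
energy = {k: 0.4 * v for k, v in cost.items()}    # MJ/t, assumed proportional to haul cost
ghg = {k: 0.1 * v for k, v in cost.items()}       # kgCO2e/t
w_cost, w_energy, w_ghg = 1.0, 0.5, 2.0           # weights of the multi-criteria objective
fixed_cost = {"S1": 500.0, "S2": 450.0}           # annualized facility cost

m = pulp.LpProblem("biofuel_supply_chain", pulp.LpMinimize)
x = pulp.LpVariable.dicts("flow", cost, lowBound=0)          # biomass shipped (kt/yr)
y = pulp.LpVariable.dicts("open", sites, cat=pulp.LpBinary)  # 1 if the facility is built

# Weighted-sum objective: delivered cost + energy + GHG, plus facility fixed costs
m += (pulp.lpSum((w_cost * cost[k] + w_energy * energy[k] + w_ghg * ghg[k]) * x[k] for k in cost)
      + pulp.lpSum(fixed_cost[s] * y[s] for s in sites))
for h in harvest_areas:                                       # cannot ship more than is available
    m += pulp.lpSum(x[(h, s)] for s in sites) <= supply[h]
for s in sites:                                               # shipments only to built facilities
    m += pulp.lpSum(x[(h, s)] for h in harvest_areas) <= capacity[s] * y[s]
m += pulp.lpSum(x[k] for k in cost) >= demand_total           # meet total feedstock demand

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: int(y[s].value()) for s in sites})
print({k: x[k].value() for k in cost if x[k].value() > 0})
```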
Abstract:
Skeletal muscle force evaluation is difficult to implement in a clinical setting. Muscle force is typically assessed through either manual muscle testing, isokinetic/isometric dynamometry, or electromyography (EMG). Manual muscle testing is a subjective evaluation of a patient’s ability to move voluntarily against gravity and to resist force applied by an examiner. Muscle testing using dynamometers adds accuracy by quantifying functional mechanical output of a limb. However, like manual muscle testing, dynamometry only provides estimates of the joint moment. EMG quantifies neuromuscular activation signals of individual muscles, and is used to infer muscle function. Despite the abundance of work performed to determine the degree to which EMG signals and muscle forces are related, the basic problem remains that EMG cannot provide a quantitative measurement of muscle force. Intramuscular pressure (IMP), the pressure applied by muscle fibers on interstitial fluid, has been considered as a correlate for muscle force. Numerous studies have shown that an approximately linear relationship exists between IMP and muscle force. A microsensor has recently been developed that is accurate, biocompatible, and appropriately sized for clinical use. While muscle force and pressure have been shown to be correlates, IMP has been shown to be non-uniform within the muscle. As it would not be practicable to experimentally evaluate how IMP is distributed, computational modeling may provide the means to fully evaluate IMP generation in muscles of various shapes and operating conditions. The work presented in this dissertation focuses on the development and validation of computational models of passive skeletal muscle and the evaluation of their performance for prediction of IMP. A transversely isotropic, hyperelastic, and nearly incompressible model will be evaluated along with a poroelastic model.
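The abstract notes an approximately linear IMP-force relationship; the short sketch below simply fits and applies such a relationship to hypothetical calibration data, as an illustration of how a clinical IMP reading could be mapped to an estimated force. The numbers are invented and do not come from the dissertation.

```python
# Sketch: fit a linear IMP-force relationship to hypothetical calibration data.
import numpy as np

# Hypothetical calibration data: intramuscular pressure (mmHg) vs. measured force (N)
imp_mmhg = np.array([10, 25, 40, 55, 70, 85])
force_n = np.array([4.8, 12.1, 19.5, 27.2, 33.9, 41.0])

slope, intercept = np.polyfit(imp_mmhg, force_n, deg=1)
print(f"force ≈ {slope:.3f} * IMP + {intercept:.3f}")

# Estimate force from a new microsensor pressure reading
new_imp = 62.0
print(f"estimated force at {new_imp} mmHg: {slope * new_imp + intercept:.1f} N")
```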
Abstract:
The number of record-breaking events expected to occur in a strictly stationary time-series depends only on the number of values in the time-series, regardless of distribution. This holds whether the events are record-breaking highs or lows and whether we count from past to present or present to past. However, these symmetries are broken in distinct ways by trends in the mean and variance. We define indices that capture this information and use them to detect weak trends from multiple time-series. Here, we use these methods to answer the following questions: (1) Is there a variability trend among globally distributed surface temperature time-series? We find a significant decreasing variability over the past century for the Global Historical Climatology Network (GHCN). This corresponds to about a 10% change in the standard deviation of inter-annual monthly mean temperature distributions. (2) How are record-breaking high and low surface temperatures in the United States affected by time period? We investigate the United States Historical Climatology Network (USHCN) and find that the ratio of record-breaking highs to lows in 2006 increases as the time-series extend further into the past. When we consider the ratio as it evolves with respect to a fixed start year, we find it is strongly correlated with the ensemble mean. We also compare the ratios for USHCN and GHCN (minus USHCN stations). We find the ratios grow monotonically in the GHCN data set, but not in the USHCN data set. (3) Do we detect either mean or variance trends in annual precipitation within the United States? We find that the total annual and monthly precipitation in the United States (USHCN) has increased over the past century. Evidence for a trend in variance is inconclusive.
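The distribution-free claim above has a classical form for independent, identically distributed values: the expected number of record highs among n values is the harmonic number H_n = 1 + 1/2 + ... + 1/n, whatever the underlying distribution, and the same holds for record lows and for counting in either time direction. The sketch below verifies this by simulation; the sample sizes and distributions are my own illustration.

```python
# Verify E[number of record highs in n i.i.d. values] = H_n by simulation.
import numpy as np

def count_records(series):
    """Number of record-breaking highs when scanning the series in the given order."""
    records, current_max = 0, -np.inf
    for value in series:
        if value > current_max:
            records += 1
            current_max = value
    return records

rng = np.random.default_rng(0)
n, trials = 100, 20_000
harmonic = sum(1.0 / k for k in range(1, n + 1))          # H_100 ≈ 5.19

samplers = {"normal": lambda: rng.standard_normal(n),
            "exponential": lambda: rng.exponential(size=n)}
for name, sampler in samplers.items():
    mean_records = np.mean([count_records(sampler()) for _ in range(trials)])
    print(f"{name}: simulated {mean_records:.3f} vs. H_n = {harmonic:.3f}")
```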