949 results for Time constraints
Exploring socioeconomic impacts of forest based mitigation projects: Lessons from Brazil and Bolivia
Abstract:
This paper aims to contribute new insights, globally and regionally, on how carbon forest mitigation contributes to sustainable development in South America. Carbon finance has emerged as a potential policy option for tackling global climate change, forest degradation, and social development in poor countries. This paper focuses on evaluating the socioeconomic impacts of a set of forest-based mitigation pilot projects that emerged under the United Nations Framework Convention on Climate Change. The paper reviews research conducted in 2001–2002, drawing on empirical data from four pilot projects, derived from qualitative stakeholder interviews and complemented by policy documents and literature. Three of the four projects studied are located in frontier areas, where there are considerable pressures to convert standing forest to agriculture. In this sense, forest mitigation projects have a substantial role to play in the region. Findings suggest, however, that all four projects experienced cumbersome implementation processes, specifically due to weak social objectives, poor communication, and time constraints. In three of the four cases, stakeholders highlighted limited local acceptance at the implementation stage. In the light of these findings, we discuss opportunities for the implementation of future forest-based mitigation projects in the land-use sector.
Abstract:
Background: Appropriately conducted adaptive designs (ADs) offer many potential advantages over conventional trials. They make better use of accruing data, potentially saving time, trial participants, and limited resources compared to conventional fixed-sample-size designs. However, one can argue that ADs are not implemented as often as they should be, particularly in publicly funded confirmatory trials. This study explored barriers, concerns, and potential facilitators to the appropriate use of ADs in confirmatory trials among key stakeholders. Methods: We conducted three cross-sectional, parallel online surveys between November 2014 and January 2015. The surveys were based upon findings drawn from in-depth interviews of key research stakeholders, predominantly in the UK, and targeted Clinical Trials Units (CTUs), public funders, and private sector organisations. Response rates were as follows: 30 (55%) UK CTUs, 17 (68%) private sector, and 86 (41%) public funders. A Rating Scale Model was used to rank barriers and concerns in order of perceived importance for prioritisation. Results: Top-ranked barriers included the lack of bridge funding accessible to UK CTUs to support the design of ADs, limited practical implementation knowledge, preference for traditional mainstream designs, difficulties in marketing ADs to key stakeholders, time constraints to support ADs relative to competing priorities, lack of applied training, and insufficient access to case studies of undertaken ADs to facilitate practical learning and successful implementation. Associated practical complexities and inadequate data management infrastructure to support ADs were reported as more pronounced in the private sector. For funders of public research, the inadequate description of the rationale, scope, and decision-making criteria guiding the planned AD in researchers' grant proposals was viewed as a major obstacle.
Conclusions: There are still persistent and important perceptions of individual and organisational obstacles hampering the use of ADs in confirmatory trials research. Stakeholder perceptions about barriers are largely consistent across sectors, with a few exceptions that reflect differences in organisations' funding structures, experiences, and characterisation of study interventions. Most barriers appear connected to a lack of practical implementation knowledge and applied training, and limited access to case studies to facilitate practical learning. Keywords: Adaptive designs; flexible designs; barriers; surveys; confirmatory trials; Phase 3; clinical trials; early stopping; interim analyses
Abstract:
Aim: To investigate how nurses in residential care facilities reason about quality audits and their possible consequences for nursing care. Method: Semi-structured interviews following an open interview guide with six nurses; qualitative content analysis according to Graneheim and Lundman's method. Main results: The degree to which the quality registers and quality audits are integrated into nursing work and its development is central to whether they are perceived as a support for, or an obstacle to, good quality of care. Double documentation leads the nurses to re-prioritise their working time and to work in a more consultative and administrative manner. This reduces the time available for nursing observations and for supervising care staff, and means that quality registrations are perceived rather as an obstacle. The nurses drew on their professional knowledge and clinical experience to a greater extent than on register data when making nursing assessments, which were seen as too complex to be captured in tick-box questionnaires. More clinical observations are called for in the quality audits, both to raise awareness of heavy workloads and their possible consequences and to guarantee residents good quality of care. Conclusion: The nurses feel that they work under severe time pressure. Time is described as essential to the quality of care that can be offered. When deciding on registrations of quality indicators, the nurses' total workload should be taken into account. The registrations should be integrated into existing record systems so that the nurses' re-prioritisations do not have negative consequences for the quality of care.
Abstract:
The English language is widely used throughout the world and has become a core subject in many countries, especially for students in the upper elementary classroom. While textbooks have long been the preferred EFL teaching material, this preference has seemingly shifted within the last few years. This study therefore examines what prior research says about the use of authentic texts in the EFL upper elementary classroom, aiming to answer research questions on how teachers can work with authentic texts, what the potential benefits of using them are, and what teachers and students say about their use in the EFL classroom. While this thesis is written from a Swedish perspective, it is recognized that many countries teach EFL; international results have therefore also been taken into consideration, and seven previous research studies have been analyzed in order to gain a better understanding of the use of authentic texts in the EFL classroom. Results indicate that the use of authentic texts is beneficial in teaching EFL. However, many teachers are still reluctant to use them, mainly because of time constraints and the belief that such texts are too difficult for their students. Since these findings are mainly focused on areas outside of Sweden, additional research is needed before conclusions can be drawn on the use of authentic texts in the Swedish upper elementary EFL classroom.
Abstract:
Within the Consumer Behaviour and Decision Theory literature there is a considerable body of work analysing negative feelings and adverse reactions in the decision-making process for purchases of high- and low-involvement products. Several phenomena are identified as negative in this process, chief among them Consumer Confusion, which comprises three dimensions: i) too much similar information about products, ii) too much information about different products, and iii) false and ambiguous information. This phenomenon, however, appears to be moderated by a set of variables such as Involvement, Experience, and Time Constraint (moderators of the relationship between Consumer Confusion and Purchase Intention). This was identified through in-depth interviews, whose results made it possible to identify the moderating variables as well as the existence of the phenomenon and its relationship with the final purchase decision. In the second phase of the research, it was hypothesised that individuals with low Involvement and under Time Constraint would be more prone to confusion. Study 2 used Involvement and Time Constraint as moderators, both manipulated by instruction, with Purchase Intention and Consumer Confusion as the dependent variables. The results of Study 2 showed significant differences between groups on Consumer Confusion, although in some groups Purchase Intention did not differ significantly. Study 3 manipulated Experience (strong and weak) and Consumer Confusion, with Purchase Intention as the dependent variable. Its results likewise showed significant differences in Purchase Intention between groups under low versus high confusion and under strong versus weak Experience. The final phase of the research highlighted the strategies consumers use to cope with Consumer Confusion.
These strategies often mediate subsequent behaviours, such as purchasing the product. Study 4 manipulated Consumer Confusion in two of its dimensions; the predominance of the information-search and decision-postponement strategies when consumers face confusing situations stood out.
Abstract:
This paper presents new evidence of the causal effect of family size on child quality in a developing-country context. We estimate the impact of family size on child labor and educational outcomes among Brazilian children and young adults by exploiting the exogenous variation in family size driven by the presence of twins in the family. Using the Brazilian Census data for 1991, we find that the exogenous increase in family size is positively related to labor force participation for boys and girls and to household chores for young women. We also find negative effects on educational outcomes for boys and girls and negative impacts on human capital formation for young female adults. Moreover, we obtain suggestive evidence that credit and time constraints faced by poor families may explain the findings.
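The identification strategy above is an instrumental-variables design, with twin births instrumenting for family size. As a hedged illustration (synthetic numbers, not the 1991 Census data; `iv_estimate` is a name invented here), the single-instrument Wald estimator at its core looks like:

```python
# Wald/IV estimator: effect of x (family size) on y (an outcome such as
# child labor), using z (twin birth indicator) as the instrument.
# Illustrative sketch only; not the paper's actual estimation code.

def iv_estimate(z, x, y):
    """Single-instrument IV slope: cov(z, y) / cov(z, x)."""
    def cov(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    return cov(z, y) / cov(z, x)

# synthetic data: a twin birth (z=1) adds one child to family size,
# and each extra child raises the outcome by 3 units
z = [0, 1, 0, 1, 0, 1]
x = [2, 3, 2, 3, 3, 4]
y = [6, 9, 6, 9, 9, 12]
print(iv_estimate(z, x, y))  # → 3.0 (up to floating point)
```

Because the instrument shifts family size exogenously, the ratio of covariances recovers the causal slope even though a naive regression of y on x could be confounded.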
Abstract:
Developing software is still a risky business. After 60 years of experience, this community is still not able to consistently build Information Systems (IS) for organizations with predictable quality, within previously agreed budget and time constraints. Although software is changeable, we are still unable to cope with the amount and complexity of change that organizations demand for their IS. To improve results, developers have followed two alternatives: frameworks, which increase productivity but constrain the flexibility of possible solutions; and agile ways of developing software, which keep flexibility with fewer upfront commitments. With strict frameworks, specific hacks have to be put in place to get around the framework's construction options. In time this leads to inconsistent architectures that are harder to maintain due to incomplete documentation and human resource turnover. The main goal of this work is to create a new way to develop flexible IS for organizations, using web technologies, in a faster, better, and cheaper way that is more suited to handling organizational change. To do so we propose an adaptive object model that uses a new ontology for data and action with strict normalizing rules. These rules should bound the effects of changes, which can then be better tested and therefore corrected. Interfaces are built with templates of resources that can be reused and extended in a flexible way. The "state of the world" for each IS is determined by all production and coordination acts that agents have performed over time, even those performed by external systems. When bugs are found during maintenance, their past cascading effects can be checked through simulation, re-running the log of transaction acts over time and checking results against previous records. This work implements a prototype with part of the proposed system in order to make a preliminary assessment of its feasibility and limitations.
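As a rough, hypothetical illustration of the adaptive-object-model idea mentioned above (this sketch uses the classic TypeObject + Property pattern commonly associated with such designs; the class names are invented, not the thesis's ontology), the key point is that the schema is ordinary runtime data:

```python
# TypeObject + Property sketch: the "schema" lives in data, so the model
# can evolve at runtime without redeploying code. Names are illustrative.

class EntityType:
    def __init__(self, name, properties):
        self.name = name
        self.properties = set(properties)  # allowed property names

class Entity:
    def __init__(self, etype):
        self.etype = etype   # the type object describing this entity
        self.values = {}

    def set(self, prop, value):
        # enforce the (mutable) schema held by the type object
        if prop not in self.etype.properties:
            raise KeyError(f"{self.etype.name} has no property {prop!r}")
        self.values[prop] = value

invoice = EntityType("Invoice", {"number", "total"})
e = Entity(invoice)
e.set("number", "2024-001")
invoice.properties.add("due_date")  # change the model at runtime, as data
e.set("due_date", "2024-12-31")
print(e.values)
```

Strict normalizing rules, as the abstract argues, would then bound what such runtime schema changes can affect, making their effects testable.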
Abstract:
Simulations based on cognitively rich agents can become a very intensive computing task, especially when the simulated environment represents a complex system. This situation becomes worse when time constraints are present. Such simulations would benefit from a mechanism that improves the way agents perceive and react to changes in these types of environments; in other words, an approach to improve the efficiency (performance and accuracy) of the decision process of autonomous agents in a simulation would be useful. In complex environments full of variables, not every piece of information available to the agent is necessarily needed for its decision-making process; this depends on the task being performed. The agent therefore needs to filter incoming perceptions in the same way we do with our attention focus. By using a focus of attention, only the information that really matters to the agent's running context is perceived (cognitively processed), which can improve the decision-making process. The architecture proposed herein presents a structure for cognitive agents divided into two parts: 1) the main part contains the reasoning/planning process, knowledge, and affective state of the agent, and 2) a set of behaviors that are triggered by planning in order to achieve the agent's goals. Each of these behaviors has a focus of attention that is dynamically adjustable at runtime, tuned according to the variation of the agent's affective state. The focus of each behavior is divided into a qualitative focus, which is responsible for the quality of the perceived data, and a quantitative focus, which is responsible for the quantity of the perceived data. Thus, the behavior is able to filter the information sent by the agent's sensors and build a list of perceived elements containing only the information necessary to the agent, according to the context of the behavior that is currently running.
Besides this human-like attention focus, the agent is also endowed with an affective state, based on theories of human emotion, mood, and personality. This model serves as the basis for the mechanism that continuously adjusts the agent's attention focus, both qualitative and quantitative. With this mechanism, the agent can adjust its focus of attention during the execution of a behavior in order to become more efficient in the face of environmental changes. The proposed architecture can be used very flexibly: the focus of attention can work in a fixed way (neither the qualitative nor the quantitative focus changes), as well as with different combinations of qualitative and quantitative focus variation. The architecture was built on a platform for BDI agents, but its design allows it to be used with any other type of agent, since the implementation is made only in the perception-level layer of the agent. In order to evaluate the contribution proposed in this work, an extensive series of experiments was conducted on an agent-based simulation of a fire-growing scenario. In the simulations, agents using the proposed architecture are compared with similar agents (with the same reasoning model) that process all the information sent by the environment. Intuitively, one would expect the omniscient agents to be more effective, since they can consider every possible option before taking a decision. However, the experiments showed that attention-focus-based agents can be as effective as the omniscient ones, with the advantage of solving the same problems in significantly less time. Thus, the experiments indicate the efficiency of the proposed architecture.
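As a hedged sketch of the dual focus described above (the class and attribute names are illustrative, not the architecture's actual API), a behavior can filter percepts qualitatively by attribute and quantitatively by count:

```python
from dataclasses import dataclass

# Hypothetical sketch of a behavior's dual attention focus: a qualitative
# focus selects which percept attributes matter, and a quantitative focus
# caps how many percepts are cognitively processed per cycle.

@dataclass
class Behavior:
    qualitative_focus: set   # attributes relevant to this behavior
    quantitative_focus: int  # max percepts processed per cycle

    def perceive(self, raw_percepts):
        # qualitative filter: keep only attributes inside the focus
        filtered = [
            {k: v for k, v in p.items() if k in self.qualitative_focus}
            for p in raw_percepts
        ]
        # drop percepts that became empty, then apply the quantitative cap
        filtered = [p for p in filtered if p]
        return filtered[: self.quantitative_focus]

fire_watch = Behavior(qualitative_focus={"heat", "smoke"}, quantitative_focus=2)
percepts = [
    {"heat": 0.9, "wind": 3.1},
    {"smoke": 0.4},
    {"humidity": 0.7},
    {"heat": 0.2, "smoke": 0.1},
]
print(fire_watch.perceive(percepts))  # → [{'heat': 0.9}, {'smoke': 0.4}]
```

Adjusting the affective state would then translate into widening or narrowing these two foci at runtime, trading decision quality against processing time.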
Abstract:
Background: Medical students engage in curricular and extracurricular activities, including undergraduate research (UR). The advantages, difficulties and motivations for medical students pursuing research activities during their studies have rarely been addressed. In Brazil, some medical schools have included undergraduate research in their curriculum. The present study aimed to understand the reality of scientific practice among medical students at a well-established Brazilian medical school, analyzing this context from the students' viewpoint. Methods: A cross-sectional survey based on a questionnaire applied to students from years one to six enrolled in an established Brazilian medical school that currently has no curricular UR program. Results: The questionnaire was answered by 415 students, 47.2% of whom were involved in research activities, with greater participation in UR in the second half of the course. Independent of student involvement in research activities, time constraints were cited as the main obstacle to participation. Among students not involved in UR, 91.1% said they favored its inclusion in the curriculum, since this would facilitate the development of such activity. This approach could signify an approximation between the axes of teaching and research. Among students who had completed at least one UR project, 87.7% said they would recommend the activity to students entering the course. Conclusion: Even without an undergraduate research program, students of this medical school report strong involvement in research activities, but discussion of the difficulties inherent in its practice is important to future developments.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The subject of this thesis is the revision of Council Directive 89/552/EEC on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the pursuit of television broadcasting activities, which for practical reasons is usually referred to as the "(EC) Television Directive". It forms the cornerstone of the EU's audiovisual policy. Since the Television Directive was adopted in 1989, however, technological progress has increasingly brought about enormous changes, not only in the field of classical television but also, and above all, in the field of the new media. The starting point is the improvement of digital technology, which in turn fosters processes of technical convergence. These developments not only multiply transmission capacities and techniques, but also enable new forms of audiovisual offerings as well as the emergence of new services. Our media landscape faces "epochal upheavals". In view of these processes, a revision of the EC Television Directive has been pursued for some time in order to do regulatory justice to technological progress. This thesis is devoted to that revision process: in a first part, it explains the Television Directive both in terms of its content and its legislative history, together with the ECJ rulings handed down on it. All revision procedures concerning the Television Directive since 1997 are then presented, so that the current reform approaches can be analysed and assessed. For reasons of time (the Commission's new draft directive of 13 December 2005 was adopted about two weeks before the submission deadline of this thesis), the discussion of the draft of the new "Audiovisual Media Services Directive" is kept relatively brief.
Abstract:
We deal with five problems arising in the field of logistics: the Asymmetric TSP (ATSP), the TSP with Time Windows (TSPTW), the VRP with Time Windows (VRPTW), the Multi-Trip VRP (MTVRP), and the Two-Echelon Capacitated VRP (2E-CVRP). The ATSP requires finding a least-cost Hamiltonian tour in a digraph. We survey models and classical relaxations, and describe the most effective exact algorithms from the literature. A survey and analysis of the polynomial formulations is provided. The considered algorithms and formulations are experimentally compared on benchmark instances. The TSPTW requires finding, in a weighted digraph, a least-cost Hamiltonian tour visiting each vertex within a given time window. We propose a new exact method, based on new tour relaxations and dynamic programming. Computational results on benchmark instances show that the proposed algorithm outperforms the state-of-the-art exact methods. In the VRPTW, a fleet of identical capacitated vehicles located at a depot must be optimally routed to supply customers with known demands and time window constraints. Different column generation bounding procedures and an exact algorithm are developed. The new exact method closed four of the five open Solomon instances. The MTVRP is the problem of optimally routing capacitated vehicles located at a depot to supply customers without exceeding maximum driving time constraints. Two set-partitioning-like formulations of the problem are introduced. Lower bounds are derived and embedded into an exact solution method that can solve benchmark instances with up to 120 customers. The 2E-CVRP requires designing the optimal routing plan to deliver goods from a depot to customers by using intermediate depots. The objective is to minimize the sum of routing and handling costs. A new mathematical formulation is introduced. Valid lower bounds and an exact method are derived.
Computational results on benchmark instances show that the new exact algorithm outperforms the state-of-the-art exact methods.
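For context, the simplest exact approach to the TSPTW mentioned above is a Held-Karp-style dynamic program over (visited set, last vertex) states that prunes time-window violations. The sketch below minimizes tour completion time on a toy instance; it is an assumption-laden illustration, not the thesis's relaxation-based method, and is only practical for very small n:

```python
# Held-Karp-style DP for a tiny TSP with Time Windows (illustrative only).
# dist[i][j]: travel times; windows[i] = (earliest, latest) service start.

def tsptw_makespan(dist, windows, depot=0):
    """Minimum completion time of a feasible tour from/to `depot`, or None."""
    n = len(dist)
    INF = float("inf")
    # dp[(visited, last)] = earliest time at which `last` can be served
    dp = {(frozenset([depot]), depot): windows[depot][0]}
    for size in range(1, n):
        new_dp = {}
        for (visited, last), t in dp.items():
            if len(visited) != size:
                continue
            for nxt in range(n):
                if nxt in visited:
                    continue
                arrival = max(t + dist[last][nxt], windows[nxt][0])  # wait if early
                if arrival > windows[nxt][1]:
                    continue  # time window missed: prune this extension
                key = (visited | {nxt}, nxt)
                if arrival < new_dp.get(key, INF):
                    new_dp[key] = arrival
        dp.update(new_dp)
    full = frozenset(range(n))
    return min(
        (t + dist[last][depot]
         for (visited, last), t in dp.items() if visited == full),
        default=None,
    )

dist = [[0, 2, 9],
        [2, 0, 4],
        [9, 4, 0]]
windows = [(0, 100), (1, 10), (5, 20)]
print(tsptw_makespan(dist, windows))  # → 15 (tour 0-1-2-0; 0-2-1-0 misses window 1)
```

The state space grows as O(2^n · n), which is exactly why the thesis's stronger tour relaxations and bounding procedures are needed for benchmark-sized instances.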
Abstract:
Domoic acid is a toxin produced by several species of marine diatoms of the genus Pseudo-nitzschia, which can accumulate during an algal bloom in molluscs such as the blue mussel Mytilus sp. Consumption of such contaminated mussels can cause considerable poisoning symptoms in both humans and animals, ranging from nausea, headache and disorientation to loss of short-term memory (hence also known as amnesic shellfish poisoning), and in some cases ending fatally.
The methods currently in common use for detecting domoic acid in mussel tissue, such as liquid chromatography and the mouse bioassay, are time-consuming and expensive or, with a view to improving animal welfare, ethically indefensible. Immunological test systems are a desirable alternative, as they are comparatively easy to handle and offer high selectivity and reproducibility.
The aim of the present work was to develop such an immunological test system for the detection of domoic acid. To this end, antibodies against domoic acid were first raised; for this, the toxin was first coupled to the carrier protein keyhole limpet hemocyanin (KLH) by the carbodiimide method in order to elicit an immune response. Rabbits and mice were immunised with KLH-DO conjugates according to defined immunisation schedules. After four bleeds, the polyclonal rabbit antiserum showed sufficiently high sensitivity to the antigen; the subsequent detection system was built using this polyclonal antibody. Towards the end of the work, a specific monoclonal mouse antibody was also obtained, but for reasons of time it could not be established in the detection system, which would certainly have been desirable.
Furthermore, in the course of developing a novel test system, domoic acid was conjugated to the carrier proteins ovalbumin, trypsin inhibitor and casein, as well as to biotin. The coupling successes were demonstrated by ELISA, Western blot and dot blot. The ovalbumin-coupled and the biotinylated domoic acid then served as the measured quantities in the detection assays; any competing domoic acid present in a sample under examination was thus detected indirectly.
The permissible maximum level for domoic acid is 20 µg/g mussel tissue. With both biotin-DO and OVA-DO as the measured quantities, domoic acid concentrations below this limit were detectable; however, the set-up with biotin-DO proved to be about 20 times more sensitive than that with OVA-DO.
The results presented in this work could serve as a basis for establishing a commercialisable immunological test system for the detection of domoic acid and other biotoxins. After successful validation, such a test system would be easier to handle than the liquid chromatography currently in use and more reproducible than the mouse bioassay.
Abstract:
Roads and highways present a unique challenge to wildlife, as they exhibit substantial impacts on the surrounding ecosystem through the interruption of a number of ecological processes. With new roads added to the national highway system every year, an understanding of these impacts is required for effective mitigation of potential environmental impacts. A major contributor to these negative effects is the deposition of chemicals used in winter deicing activities into nearby surface waters. These chemicals often vary in composition and may affect freshwater species differently. The negative impacts of widespread deposition of sodium chloride (NaCl) have prompted a search for an 'environmentally friendly' alternative. However, little research has investigated the potential environmental effects of widespread use of these alternatives. Herein, I detail the results of laboratory tests and field surveys designed to determine the impacts of road salt (NaCl) and other chemical deicers on amphibian communities in Michigan's Upper Peninsula. Using larval amphibians, I demonstrate the lethal impacts of a suite of chemical deicers on these sensitive freshwater species. Larval wood frogs (Lithobates sylvatica) were tolerant of short-term (96-hour) exposure to urea (CH4N2O), sodium chloride (NaCl), and magnesium chloride (MgCl2). However, these larvae were very sensitive to acetate products (C8H12CaMgO8, CH3COOK) and calcium chloride (CaCl2). These differences in tolerance suggest that certain deicers may be more harmful to amphibians than others. Secondly, I expanded this analysis to include an experiment designed to determine the sublethal effects of chronic exposure to environmentally realistic concentrations of NaCl on two distinct amphibian species, L. sylvatica and green frogs (L. clamitans). L. sylvatica tend to breed in small, ephemeral wetlands and metamorphose within a single season, whereas L. clamitans breed primarily in more permanent wetlands and often remain as tadpoles for one year or more. These species employ different life history strategies in this region, which may influence their response to chronic NaCl exposure. In both species, chronic exposure produced potentially harmful effects on individual fitness. L. sylvatica larvae had a high incidence of edema, suggesting that the NaCl exposure was a significant physiological stressor. L. clamitans larvae showed reduced tail length during their exposure, which may affect the adult fitness of these individuals. In order to determine the risk local amphibians face when using roadside pools, I conducted a survey of the spatial distribution of chloride in the three northernmost counties of Michigan. This area receives a relatively low amount of NaCl, which is confined to state and federal highways. The chloride concentrations in this region were much lower than those in urban systems; however, amphibians breeding in the local area may encounter harmful chloride levels arising from temporal variations in hydroperiods. Spatial variation of chloride levels suggests the road-effect zone for amphibians may extend as far as 1000 m from a salt-treated highway. Lastly, I performed an analysis of the use of specific conductance to predict chloride concentrations in natural surface water bodies. A number of studies have used this regression to predict chloride concentrations from measurements of specific conductance, often in place of ion chromatography due to budget and time constraints. However, using a regression method to characterize this relationship does not result in accurate chloride ion concentration estimates.
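The conductance-to-chloride prediction discussed above rests on an ordinary least-squares line fit. A minimal sketch with made-up numbers (not the survey's data):

```python
# Ordinary least squares for y = a + b*x, as used to predict chloride (mg/L)
# from specific conductance (uS/cm). All data points below are invented.

def fit_line(xs, ys):
    """Return (intercept a, slope b) minimizing squared error of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

cond = [100, 300, 500, 900, 1500]   # specific conductance, uS/cm
chloride = [12, 60, 110, 215, 370]  # measured chloride, mg/L
a, b = fit_line(cond, chloride)
print(a + b * 700)  # predicted chloride at 700 uS/cm
```

The abstract's caveat is precisely that such a fitted line, however convenient, can misestimate actual chloride concentrations in natural waters where other ions also contribute to conductance.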
Abstract:
Mixed Reality (MR) aims to link virtual entities with the real world and has many applications such as military and medical domains [JBL+00, NFB07]. In many MR systems and more precisely in augmented scenes, one needs the application to render the virtual part accurately at the right time. To achieve this, such systems acquire data related to the real world from a set of sensors before rendering virtual entities. A suitable system architecture should minimize the delays to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to specify, simulate and validate formally the time constraints of such systems. Our approach is first based on a functional decomposition of such systems into generic components. The obtained elements as well as their typical interactions give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of such defined components. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of time constraints. These automata may also be used to generate source code skeletons for an implementation on a MR platform. The approach is illustrated first on a small example. A realistic case study is also developed. It is modeled by several timed automata synchronizing through channels and including a large number of time constraints. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
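A toy version of the end-to-end latency property such timed-automata models verify (the actual verification runs on UPPAAL automata; the stage names and worst-case delays below are invented purely for illustration):

```python
# Toy worst-case end-to-end latency check for an MR pipeline: sum the
# worst-case delay of each stage and compare it with the real-time budget.
# Stage names and numbers are hypothetical, not from the paper.

def worst_case_latency(stages):
    """stages: list of (name, worst_case_ms) traversed in pipeline order."""
    return sum(delay for _, delay in stages)

pipeline = [
    ("sensor acquisition",     8),
    ("tracking/registration", 12),
    ("virtual-scene update",   6),
    ("rendering",             16),
]
budget_ms = 50  # assumed real-time requirement
total = worst_case_latency(pipeline)
print(total, total <= budget_ms)  # → 42 True
```

Timed automata generalize this simple sum to interleaved, synchronizing components with waiting and branching, which is why model checking (rather than arithmetic) is needed for realistic MR architectures.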