898 results for Time and hardware redundancy
Abstract:
OBJECTIVES: To determine characteristics associated with single and multiple fallers during postacute rehabilitation and to investigate the relationship among falls, rehabilitation outcomes, and health services use. DESIGN: Retrospective cohort study. SETTING: Geriatric postacute rehabilitation hospital. PARTICIPANTS: Patients (n = 4026) consecutively admitted over a 5-year period (2003-2007). MEASUREMENTS: All falls during hospitalization were prospectively recorded. Collected patients' characteristics included health, functional, cognitive, and affective status data. Length of stay and discharge destination were retrieved from the administrative database. RESULTS: During rehabilitation stay, 11.4% (458/4026) of patients fell once and an additional 6.3% (253/4026) fell several times. Compared with nonfallers, fallers were older and more frequently men. They were globally frailer, with lower Barthel score and more comorbidities, cognitive impairment, and depressive symptoms. In multivariate analyses, compared with 1-time fallers, multiple fallers were more likely to have lower Barthel score (adjOR: 2.45, 95% CI: 1.48-4.07; P = .001), cognitive impairment (adjOR: 1.43, 95% CI: 1.04-1.96; P = .026), and to have been admitted from a medicine ward (adjOR: 1.55, 95% CI: 1.03-2.32; P = .035). Odds of poor functional recovery and institutionalization at discharge, as well as length of stay, increased incrementally from nonfallers to 1-time and to multiple fallers. CONCLUSION: In these patients admitted to postacute rehabilitation, the proportion of fallers and multiple fallers was high. Multiple fallers were particularly at risk of poor functional recovery and increased health services use. Specific fall prevention programs targeting high-risk patients with cognitive impairment and low functional status should be developed in further studies.
Abstract:
The general strategy to perform anti-doping analyses of urine samples starts with the screening for a wide range of compounds. This step should be fast, generic and able to detect any sample that may contain a prohibited substance while avoiding false negatives and reducing false positive results. The experiments presented in this work were based on ultra-high-pressure liquid chromatography coupled to hybrid quadrupole time-of-flight mass spectrometry. Thanks to the high sensitivity of the method, urine samples could be diluted 2-fold prior to injection. One hundred and three forbidden substances from various classes (such as stimulants, diuretics, narcotics, anti-estrogens) were analysed on a C18 reversed-phase column in two gradients of 9 min (including two 3 min equilibration periods) for positive and negative electrospray ionisation and detected in the MS full-scan mode. The automatic identification of analytes was based on retention time and mass accuracy, with an automated tool for peak picking. The method was validated according to the International Standard for Laboratories described in the World Anti-Doping Code and was selective enough to comply with the World Anti-Doping Agency recommendations. In addition, the matrix effect on MS response was measured on all investigated analytes spiked in urine samples. The limits of detection ranged from 1 to 500 ng/mL, allowing the identification of all tested compounds in urine. When a sample was reported positive during the screening, a fast additional pre-confirmatory step was performed to reduce the number of confirmatory analyses.
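The automatic identification described here combines a retention-time window with a mass-accuracy (ppm) window. A minimal sketch of that matching logic follows; the tolerance values and the two-compound reference library are illustrative assumptions, not values taken from the published method.

```python
# Hypothetical sketch of screening identification by retention time and
# mass accuracy. Tolerances and library entries are assumed for illustration.

PPM_TOL = 5.0   # assumed mass-accuracy window, parts per million
RT_TOL = 0.1    # assumed retention-time window, minutes

LIBRARY = {
    # name: (expected retention time [min], theoretical m/z) -- illustrative
    "furosemide": (4.12, 329.0071),
    "strychnine": (2.85, 335.1754),
}

def ppm_error(observed_mz, theoretical_mz):
    """Mass deviation of an observed peak from the theoretical m/z, in ppm."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

def identify(peak_rt, peak_mz):
    """Return all library analytes matching a detected peak on both criteria."""
    hits = []
    for name, (rt, mz) in LIBRARY.items():
        if abs(peak_rt - rt) <= RT_TOL and abs(ppm_error(peak_mz, mz)) <= PPM_TOL:
            hits.append(name)
    return hits

print(identify(4.10, 329.0074))  # a peak close to furosemide on both axes
```

In a real screening workflow the library would hold all 103 analytes per ionisation mode, and a peak-picking step would supply the (rt, m/z) pairs.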
Abstract:
Ecological stressors can vary in type and intensity over space and time, and as such, a single phenotype may not confer the highest fitness. Phenotypic plasticity can act as a means to accommodate this situation, increasing overall tolerance to environmental change. As with any trait, for plastic traits to evolve in a population, genetic variation must persist. However, environmental stress can alter trait heritability, and the direction of this shift can be trait-, species-, and stressor-dependent. In this thesis, we sought to understand the effects of pathogen stressors on the phenotypes and genetic architecture of several plastic traits in the embryos of two salmonids, the whitefish (Coregonus palaea) and the brown trout (Salmo trutta). Salmonids lend themselves to such studies because of their extraordinary variability in morphological, behavioral, and life-history traits. Also, with declines in salmonids worldwide, knowing how much genetic variability persists in reaction norms may help predict their ability to respond to environmental change. We found that increasing growth of symbiotic microbial communities increased mortality and induced earlier hatching in whitefish, and released additive genetic variance for both traits (Chapters 1-2). While no genetic variation was found for survival reaction norms, we did find variability in hatching plasticity. Nevertheless, hatching time was correlated across environments, which could constrain evolution of the reaction norm. Hatching time in the induced environment was also correlated to sire size, indicating pleiotropic effects.
In Chapter 3 we report that a three-way interaction between bacterial strain (Pseudomonas fluorescens), host developmental stage, and host genetics impacted mortality, hatching time, and hatchling size in whitefish. We also showed that genetic variation generally persisted in hatching-age reaction norms, but rarely for hatchling length, and never for mortality. At the same time, we demonstrated that cross-environmental correlations were trait-dependent, and unlike in Chapter 2, we found no evidence of cross-generational correlations. Chapter 4 expands on the previous chapter, moving to the molecular level, and describes how treatment of embryos with P. fluorescens resulted in strain-independent downregulation of MHC class I. Genetic variation was evident not only in trait means, but also in plasticity. In the last two chapters, we investigated population-level differences in pathogen-induced reaction norms in brown trout. In Chapter 5, we found that interbreeding between genetically distinct populations did not affect the elevation or shape of the reaction norms of early life-history traits after pathogen challenge. Moreover, despite delaying hatching and reducing larval length, treatment produced no discernible shifts in heritable variation in traits. On the other hand, in Chapter 6, we found that treatment of embryos with water-borne cues from infected conspecifics elicited population-specific responses in terms of hatching time; however, we found little evidence of genetic variability in hatching reaction norms within populations. We have made considerable progress in understanding how pathogen stressors affect various early life-history traits in salmonid embryos. We have demonstrated that the effect of a particular stressor on heritable variation in these traits can vary according to the trait and species under consideration, in addition to the developmental stage of the host.
Moreover, we found evidence of genetic variability in some, but not all reaction norms in whitefish and brown trout.
Abstract:
Glucose has been considered the major, if not the exclusive, energy substrate for the brain. But under certain physiological and pathological conditions other substrates, namely monocarboxylates (lactate, pyruvate and ketone bodies), can contribute significantly to satisfying brain energy demands. These monocarboxylates need to be transported across the blood-brain barrier, or out of astrocytes into the extracellular space, and taken up into neurons. It has been shown that monocarboxylates are transported by a family of proton-linked transporters called monocarboxylate transporters (MCTs). In the central nervous system, MCT2 is the predominant neuronal isoform, and little is known about the regulation of its expression. Noradrenaline (NA), insulin and IGF-1 were previously shown to enhance the expression of MCT2 in cultured cortical neurons via a translational mechanism. Here we demonstrate that the well-known brain neurotrophic factor BDNF enhances MCT2 protein expression in cultured cortical neurons and in synaptoneurosome preparations in a time- and concentration-dependent manner without affecting MCT2 mRNA levels. We observed that BDNF induced MCT2 expression through activation of the MAPK as well as the PI3K/Akt/mTOR signaling pathways. Furthermore, we investigated the possible post-transcriptional regulation of MCT2 expression by a neuronal miRNA. We then demonstrated that BDNF enhanced MCT2 expression in the hippocampus in vivo, in parallel with post-synaptic proteins such as PSD95 and the AMPA receptor GluR2/3 subunits, and two immediate early genes, Arc and Zif268, known to be expressed in conditions related to synaptic plasticity. In the last part, we demonstrated in vivo that downregulation of hippocampal MCT2 via silencing with an appropriate lentiviral vector in mice caused an impairment of working memory without a reference memory deficit.
In conclusion, these results suggest that regulation of neuronal monocarboxylate transporter MCT2 expression could be a key event in the context of synaptic plasticity, allowing an adequate energy substrate supply in situations of altered synaptic efficacy.
Abstract:
This work is a study and analysis of the loanwords collected from two issues of the Catalan magazine Time Out Barcelona. The aim is to examine borrowing in depth and to draw some conclusions about the types of borrowed units and how they are treated.
Abstract:
Aim Species distribution models (SDMs) based on current species ranges underestimate the potential distribution when projected in time and/or space. A multi-temporal model calibration approach has been suggested as an alternative, and we evaluate this using 13,000 years of data. Location Europe. Methods We used fossil-based records of presence for Picea abies, Abies alba and Fagus sylvatica and six climatic variables for the period 13,000 to 1000 yr BP. To measure the contribution of each 1000-year time step to the total niche of each species (the niche measured by pooling all the data), we employed a principal components analysis (PCA) calibrated with data over the entire range of possible climates. Then we projected both the total niche and the partial niches from single time frames into the PCA space, and tested if the partial niches were more similar to the total niche than random. Using an ensemble forecasting approach, we calibrated SDMs for each time frame and for the pooled database. We projected each model to current climate and evaluated the results against current pollen data. We also projected all models into the future. Results Niche similarity between the partial and the total-SDMs was almost always statistically significant and increased through time. SDMs calibrated from single time frames gave different results when projected to current climate, providing evidence of a change in the species realized niches through time. Moreover, they predicted limited climate suitability when compared with the total-SDMs. The same results were obtained when projected to future climates. Main conclusions The realized climatic niche of species differed for current and future climates when SDMs were calibrated considering different past climates. Building the niche as an ensemble through time represents a way forward to a better understanding of a species' range and its ecology in a changing climate.
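The "partial niche vs. total niche" comparison in this abstract can be sketched with a permutation test in PCA space. The toy climate data, the two-axis PCA, the centroid-distance similarity measure and the number of permutations below are all illustrative assumptions, not the authors' actual procedure.

```python
# Hedged sketch: is one time frame's niche more similar to the pooled niche
# than a random subset of the same size would be? All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
pooled = rng.normal(size=(600, 6))   # stands in for all records, 6 climate variables
partial = pooled[:100] + 0.2         # stands in for one 1000-year time frame

def pca_axes(data, k=2):
    """Leading k principal axes of the centred data (eigenvectors of the covariance)."""
    centred = data - data.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(centred, rowvar=False))
    return vecs[:, np.argsort(vals)[::-1][:k]]

axes = pca_axes(pooled)              # PCA calibrated on the full climate space

def centroid(data):
    """Centroid of a record set projected into the pooled PCA space."""
    return ((data - pooled.mean(axis=0)) @ axes).mean(axis=0)

# Observed similarity: distance between the partial-niche and total-niche centroids.
observed = np.linalg.norm(centroid(partial) - centroid(pooled))

# Null distribution: random subsets of the pooled records, same size as the partial niche.
null = []
for _ in range(199):
    idx = rng.choice(len(pooled), size=len(partial), replace=False)
    null.append(np.linalg.norm(centroid(pooled[idx]) - centroid(pooled)))

# Smaller distance = more similar; p is the chance a random subset is at least as similar.
p_value = (1 + sum(d <= observed for d in null)) / (1 + len(null))
```

A real analysis would use fossil presence records per 1000-year step and a niche-overlap metric rather than a simple centroid distance.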
Abstract:
The motivation for this research arose from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and business applications owing to their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike with CISC processors, RISC processor architecture is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice through hardware-independent, real-time-capable software applications.
An architecture disruption emerged, and the smartphone and tablet markets formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed - all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries. Industrial automation incumbents continue to supply systems based on vertically integrated architectures consisting of proprietary software and proprietary, mainly microprocessor-based, hardware.
They enjoy admirable profitability levels on a very narrow customer base due to strong technology-enabled customer lock-in and customers' high risk leverage, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation subject to the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of complex-system global software support. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes industrial automation could take, given the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining their own proprietary solutions.
The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created new markets of personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
Queuing is a fact of life that we witness daily. We all have had the experience of waiting in line for some reason, and we also know that it is an annoying situation. As the adage says, "time is money"; this is perhaps the best way of stating what queuing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queuing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. The individual behaviour of the agents involved in queueing systems and their decision-making processes have received little attention. Although this work has been useful to improve the efficiency of many queueing systems, or to design new processes in social and physical systems, it has only provided us with a limited ability to explain the behaviour observed in many real queues. In this dissertation we differ from this traditional research by analysing how the agents involved in the system make decisions instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010). We focus on studying behavioural aspects in queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results.
In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best-performing neighbour) to form expectations of sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk-aversion who take uncertainty into account. They choose which facility to join based on an estimated upper bound of the sojourn time, which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that they do not wish to switch facility. In this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk-aversion and system performance. Customers with an intermediate degree of risk-aversion typically achieve higher sojourn times; in particular, they rarely achieve the Nash equilibrium.
Risk-neutral customers have the highest probability of achieving the Nash equilibrium. Chapter 3 considers a service system similar to the previous one but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates, and use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherent they are in accounting for their previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit a strong path-dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked in" to a monopoly or duopoly situation. The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we found that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, take decisions in a laboratory regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010).
This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who currently do not patronise it, but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns, labelled gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we illustrate that these decisions are positively correlated to the decisions taken one period earlier. Subsequently, we formulate a heuristic to model the decision rule used by subjects in the laboratory. We found that this decision rule fits very well for those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects in the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical literature on queueing systems, which focuses on optimising performance measures and the analysis of equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages, and accordingly there is large potential for further work spanning several research topics. Interesting extensions to this work include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information to take their decisions (e.g.
service price, quality, customers' profile); analysing different decision rules and studying other characteristics which determine the profile of customers and managers.
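The adaptive-expectations capacity adjustment described in Chapter 3 can be sketched in a few lines. This is an illustrative toy model, not the thesis's actual simulation: the parameter names (alpha, beta, target_sojourn) and all numerical values are assumptions chosen for the sketch, and the capacity rule uses the textbook M/M/1 sojourn-time formula W = 1/(mu - lambda) as the benchmark.

```python
import random

# Illustrative sketch (assumed parameters, not the authors' model): a manager
# updates a perception of the arrival rate by adaptive expectations and then
# adjusts capacity gradually towards the level that would meet a benchmark
# sojourn time in an M/M/1 queue.
def simulate(periods=50, alpha=0.3, beta=0.5, target_sojourn=2.0, seed=1):
    rng = random.Random(seed)
    perceived = 10.0   # initial perceived arrival rate (assumed)
    capacity = 12.0    # initial service capacity, i.e. service rate (assumed)
    history = []
    for _ in range(periods):
        arrivals = rng.uniform(8.0, 14.0)  # observed arrival rate this period
        # Adaptive expectations: move the perception a fraction alpha
        # towards the latest observation.
        perceived += alpha * (arrivals - perceived)
        # Capacity needed for sojourn time W = 1/(mu - lambda) = target:
        desired = perceived + 1.0 / target_sojourn
        # Partial adjustment: decisions take time to be implemented.
        capacity += beta * (desired - capacity)
        history.append((perceived, capacity))
    return history

hist = simulate()
print(hist[-1])
```

With a smaller alpha (a more "conservative" manager) the perception, and hence the capacity, reacts more slowly to demand fluctuations, which is the profile dimension the summary discusses.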
Resumo:
The interaction of tunneling with groundwater is a problem from both an environmental and an engineering point of view. In fact, tunnel drilling may cause a drawdown of piezometric levels and water inflows into tunnels that may cause problems during excavation. While the influence of tunneling on regional groundwater systems may be adequately predicted in porous media using analytical solutions, such an approach is difficult to apply in fractured rocks. Numerical solutions are preferable, and various conceptual approaches have been proposed to describe and model groundwater flow through fractured rock masses, ranging from equivalent continuum models to discrete fracture network simulation models. However, their application requires extensive preliminary investigation of the behavior of the groundwater system, based on hydrochemical and structural data. To study large-scale flow systems in fractured rocks of mountainous terrains, a comprehensive study was conducted in southern Switzerland, using as case studies two infrastructures currently under construction: (i) the Monte Ceneri base railway tunnel (Ticino), and (ii) the San Fedele highway tunnel (Roveredo, Graubünden). The approach chosen in this study combines the temporal and spatial variation of geochemical and geophysical measurements. About 60 localities, both at the surface and in the underlying tunnels, were monitored temporally and spatially for more than one year. At first, the project focused on the collection of hydrochemical and structural data. A number of springs, selected in the area surrounding the infrastructures, were monitored for discharge, electrical conductivity, pH, and temperature. Water samples (springs, tunnel inflows and rains) were taken for isotopic analysis; in particular, the stable isotope composition (δ2H, δ18O values) can reflect the origin of the water, because of spatial (recharge altitude, topography, etc.) 
and temporal (seasonal) effects on precipitation, which in turn strongly influence the isotopic composition of groundwater. Tunnel inflows in the accessible parts of the tunnels were also sampled and, where possible, monitored over time. Noble-gas concentrations and their isotope ratios were used in selected locations to better understand the origin and circulation of the groundwater. In addition, electrical resistivity and VLF-type electromagnetic surveys were performed to identify water-bearing fractures and/or weathered areas that could be intersected at depth during tunnel construction. The main goal of this work was to demonstrate that these hydrogeological data and geophysical methods, combined with structural and hydrogeological information, can be successfully used to develop hydrogeological conceptual models of groundwater flow in regions to be exploited for tunnels. The main results of the project are: (i) to have successfully tested the application of electrical resistivity and VLF-electromagnetic surveys to assess water-bearing zones during tunnel drilling; (ii) to have verified the usefulness of noble gas, major ion and stable isotope compositions as proxies for the detection of faults and for understanding the origin of the groundwater and its flow regimes (direct rain water infiltration or groundwater of long residence time); and (iii) to have convincingly tested the combined application of a geochemical and geophysical approach to assess and predict the vulnerability of springs to tunnel drilling. - "The NRLA (New Rail Link through the Alps) Gotthard axis is the most important construction project in Switzerland. By building the new Gotthard line, Switzerland is realising one of the largest environmental protection projects in Europe." This sentence, which introduces the Alptransit project, is particularly eloquent in explaining the usefulness of the new trans-European railway lines for sustainable development. However, like all large infrastructures, the construction of new tunnels has inevitable impacts on the environment. In particular, the drainage of groundwater by the tunnel can lower piezometric levels. Moreover, water flow into the tunnel often leads to engineering problems: large water inflows can complicate the excavation phases, delaying progress and, in the worst cases, endangering the safety of workers. Finally, water infiltration can be a major problem during the operation of the tunnel. From a scientific point of view, access to underground infrastructures represents a unique opportunity to obtain geological information at depth and to sample otherwise inaccessible waters. In this work we used a multidisciplinary approach that integrates hydrogeochemical measurements on surface waters with indirect geophysical investigations such as electrical resistivity tomography (ERT) and VLF electromagnetic measurements. The study was carried out in southern Switzerland, based on two large infrastructures currently under construction: the Monte Ceneri base railway tunnel, part of the aforementioned Alptransit project, located entirely in the canton of Ticino, and the San Fedele highway tunnel, located at Roveredo in the canton of Graubünden. The main objective was to show how the two approaches, geophysical and geochemical, can be integrated in order to answer the question of the possible effects of the drainage caused by underground works. Access to the tunnels allowed an adequate validation of the investigations, confirming, in each case, the proposed hypotheses. To this end, about 50 geophysical profiles were acquired (28 two-dimensional electrical imaging and 23 electromagnetic) in the zones possibly influenced by the tunnels, with the aim of identifying the fractures and discontinuities through which groundwater may circulate. In addition, waters were sampled at 60 localities located at the surface as well as in the underlying tunnels; the monthly monitoring lasted more than one year. All the main physical and chemical parameters were measured: discharge, electrical conductivity, pH and temperature. In addition, water samples were taken for monthly analysis of the stable isotopes of hydrogen and oxygen (δ2H, δ18O). With these analyses, together with the measurement of the concentrations of noble gases dissolved in the waters and of their isotope ratios, carried out in some specific cases, it was possible to explain the origin of the different groundwaters, the various recharge modes of the aquifers, and the presence of possible mixing phenomena, and, in general, to better explain groundwater circulation in the subsurface. Although it constitutes only a partial answer to a very complex question, this work achieved several important objectives. First, we successfully tested the applicability of indirect geophysical methods (ERT and VLF electromagnetics) for predicting the presence of groundwater in the subsurface of rock massifs. Furthermore, we demonstrated the usefulness of noble gas, stable isotope and major ion analyses for detecting faults and for understanding the origin of groundwater (rain water infiltrating from above or water rising from depth). In conclusion, this research showed that the integration of geophysical and geochemical information enables the development of appropriate conceptual models that explain how groundwater circulates. Such models make it possible to anticipate water inflows into tunnels and to predict the vulnerability of springs and other water resources during tunnel construction.
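The altitude effect on precipitation δ18O mentioned above is what allows a mean recharge altitude to be estimated for a spring or tunnel inflow. A minimal sketch of that calculation follows; the isotopic lapse rate of −0.25‰ per 100 m, the reference-station values, and the sample value are all hypothetical placeholders (in practice the gradient must be calibrated from local precipitation stations):

```python
def recharge_altitude(delta18o_sample, delta18o_ref, ref_altitude_m,
                      gradient_permil_per_100m=-0.25):
    """Estimate mean recharge altitude from the altitude effect on d18O.

    delta18o_sample: d18O of the spring/tunnel inflow (permil vs VSMOW)
    delta18o_ref:    d18O of precipitation at a reference station
    ref_altitude_m:  altitude of that station (m a.s.l.)
    gradient_permil_per_100m: assumed local isotopic lapse rate (site-specific)
    """
    delta_diff = delta18o_sample - delta18o_ref
    return ref_altitude_m + 100.0 * delta_diff / gradient_permil_per_100m

# Hypothetical numbers for illustration only: a sample 2.5 permil lighter
# than precipitation at a 500 m station implies recharge ~1000 m higher.
print(recharge_altitude(-12.0, -9.5, 500.0))  # 1500.0 (m a.s.l.)
```

Isotopically lighter (more negative) water thus points to a higher mean recharge altitude, which is how the study distinguishes direct local infiltration from water recharged on higher slopes.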
Resumo:
A hallmark of aging is the sensorimotor deficit, characterized by increased reaction time and reduced motor abilities. Some mechanisms, such as motor inhibition, deteriorate with aging because of alterations in neuronal density and modifications of the connections between brain regions. These deficits may be compensated through the recruitment of additional areas. Studies have shown that older adults have increased difficulty performing bimanual coordination tasks compared with young adults. In contrast, motor switching is poorly documented and is expected to engage increasing resources in the elderly. The present study examines the performance and electro-cortical correlates of motor switching in young and elderly adults.
Resumo:
Recent multisensory research has emphasized the occurrence of early, low-level interactions in humans. As such, it is proving increasingly necessary to also consider the kinds of information likely extracted from the unisensory signals that are available at the time and location of these interaction effects. This review addresses current evidence regarding how the spatio-temporal brain dynamics of auditory information processing likely curtails the information content of multisensory interactions observable in humans at a given latency and within a given brain region. First, we consider the time course of signal propagation as a limitation on when auditory information (of any kind) can impact the responsiveness of a given brain region. Next, we overview the dual pathway model for the treatment of auditory spatial and object information ranging from rudimentary to complex environmental stimuli. These dual pathways are considered an intrinsic feature of auditory information processing, which are not only partially distinct in their associated brain networks, but also (and perhaps more importantly) manifest only after several tens of milliseconds of cortical signal processing. This architecture of auditory functioning would thus pose a constraint on when and in which brain regions specific spatial and object information are available for multisensory interactions. We then separately consider evidence regarding mechanisms and dynamics of spatial and object processing with a particular emphasis on when discriminations along either dimension are likely performed by specific brain regions. We conclude by discussing open issues and directions for future research.
Resumo:
PURPOSE: To compare volume-targeted and whole-heart coronary magnetic resonance angiography (MRA) after the administration of an intravascular contrast agent. MATERIALS AND METHODS: Six healthy adult subjects underwent a navigator-gated and -corrected (NAV) free breathing volume-targeted cardiac-triggered inversion recovery (IR) 3D steady-state free precession (SSFP) coronary MRA sequence (t-CMRA) (spatial resolution = 1 x 1 x 3 mm(3)) and high spatial resolution IR 3D SSFP whole-heart coronary MRA (WH-CMRA) (spatial resolution = 1 x 1 x 2 mm(3)) after the administration of an intravascular contrast agent B-22956. Subjective and objective image quality parameters including maximal visible vessel length, vessel sharpness, and visibility of coronary side branches were evaluated for both t-CMRA and WH-CMRA. RESULTS: No significant differences (P = NS) in image quality were observed between contrast-enhanced t-CMRA and WH-CMRA. However, using an intravascular contrast agent, significantly longer vessel segments were measured on WH-CMRA vs. t-CMRA (right coronary artery [RCA] 13.5 +/- 0.7 cm vs. 12.5 +/- 0.2 cm; P < 0.05; and left circumflex coronary artery [LCX] 11.9 +/- 2.2 cm vs. 6.9 +/- 2.4 cm; P < 0.05). Significantly more side branches (13.3 +/- 1.2 vs. 8.7 +/- 1.2; P < 0.05) were visible for the left anterior descending coronary artery (LAD) on WH-CMRA vs. t-CMRA. Scanning time and navigator efficiency were similar for both techniques (t-CMRA: 6.05 min; 49% vs. WH-CMRA: 5.51 min; 54%, both P = NS). CONCLUSION: Both WH-CMRA and t-CMRA using SSFP are useful techniques for coronary MRA after the injection of an intravascular blood-pool agent. However, the vessel conspicuity for high spatial resolution WH-CMRA is not inferior to t-CMRA, while visible vessel length and the number of visible smaller-diameter vessels and side-branches are improved.
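Each of the six subjects was measured with both sequences, so the vessel-length comparisons above are paired, within-subject contrasts. The sketch below shows how such a paired t statistic is computed; note that the abstract does not name the statistical test used, and the per-subject lengths here are invented for illustration (only the group means ± SD reported in the abstract are real):

```python
import math

# Hypothetical per-subject RCA vessel lengths (cm) under the two sequences.
wh  = [13.0, 14.2, 13.5, 12.9, 13.8, 13.6]   # WH-CMRA (assumed values)
tgt = [12.4, 12.7, 12.5, 12.3, 12.6, 12.5]   # t-CMRA  (assumed values)

# Paired t test operates on the within-subject differences.
diffs = [a - b for a, b in zip(wh, tgt)]
n = len(diffs)
mean = sum(diffs) / n
var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
t_stat = mean / math.sqrt(var / n)                    # df = n - 1 = 5
print(round(t_stat, 2))
```

A t statistic this large on 5 degrees of freedom corresponds to P < 0.05, the threshold the study reports for the vessel-length differences.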
Resumo:
BACKGROUND: Non-communicable diseases (NCDs) are increasing worldwide. We hypothesize that environmental factors (including social adversity, diet, lack of physical activity and pollution) can become "embedded" in the biology of humans. We also hypothesize that the "embedding" partly occurs because of epigenetic changes, i.e., durable changes in gene expression patterns. Our concern is that once such factors have a foundation in human biology, they can affect human health (including NCDs) over a long period of time and across generations. OBJECTIVES: To analyze how worldwide changes in movements of goods, persons and lifestyles (globalization) may affect the "epigenetic landscape" of populations and through this have an impact on NCDs. We provide examples of such changes and effects by discussing the potential epigenetic impact of socio-economic status, migration, and diet, as well as the impact of environmental factors influencing trends in age at puberty. DISCUSSION: The study of durable changes in epigenetic patterns has the potential to influence policy and practice; for example, by enabling stratification of populations into those who could particularly benefit from early interventions to prevent NCDs, or by demonstrating mechanisms through which environmental factors influence disease risk, thus providing compelling evidence for policy makers, companies and the civil society at large. The current debate on the '25 × 25 strategy', a goal of 25% reduction in relative mortality from NCDs by 2025, makes the proposed approach even more timely. CONCLUSIONS: Epigenetic modifications related to globalization may crucially contribute to explain current and future patterns of NCDs, and thus deserve attention from environmental researchers, public health experts, policy makers, and concerned citizens.
Resumo:
Soil treated with self-cementing fly ash is increasingly being used in Iowa to stabilize fine-grained pavement subgrades, but without a complete understanding of the short- and long-term behavior. To develop a broader understanding of fly ash engineering properties, mixtures of five different soil types, ranging from ML to CH, and several different fly ash sources (including hydrated and conditioned fly ashes) were evaluated. Results show that soil compaction characteristics, compressive strength, wet/dry durability, freeze/thaw durability, hydration characteristics, rate of strength gain, and plasticity characteristics are all affected by the addition of fly ash. Specifically: Iowa self-cementing fly ashes are effective at stabilizing fine-grained Iowa soils for earthwork and paving operations; fly ash increases compacted dry density and reduces the optimum moisture content; strength gain in soil-fly ash mixtures depends on cure time and temperature, compaction energy, and compaction delay; sulfur contents can form expansive minerals in soil-fly ash mixtures, which severely reduces long-term strength and durability; fly ash increases the California bearing ratio of fine-grained soil; fly ash effectively dries wet soils and provides an initial rapid strength gain; fly ash decreases the swell potential of expansive soils; soil-fly ash mixtures cured below freezing temperatures and then soaked in water are highly susceptible to slaking and strength loss; soil stabilized with fly ash exhibits increased freeze-thaw durability; and soil strength can be increased with the addition of hydrated fly ash and conditioned fly ash, though at higher application rates and not as effectively as with self-cementing fly ash. Based on the results of this study, three specifications were proposed for the use of self-cementing fly ash, hydrated fly ash, and conditioned fly ash. 
The specifications describe laboratory evaluation, field placement, moisture conditioning, compaction, quality control testing procedures, and basis of payment.
Resumo:
Currently, no standard mix design procedure is available for CIR-emulsion in Iowa. The CIR-foam mix design process developed during the previous phase was applied to CIR-emulsion mixtures with varying emulsified asphalt contents. Dynamic modulus, dynamic creep, static creep and raveling tests were conducted to evaluate the short- and long-term performance of CIR-emulsion mixtures at various testing temperatures and loading conditions. A potential benefit of this research is a better understanding of CIR-emulsion material properties in comparison with those of CIR-foam material, which would allow selection of the most appropriate CIR technology and of the type and amount of the optimum stabilization material. Dynamic modulus, flow number and flow time of CIR-emulsion mixtures using CSS-1h were generally higher than those using HFMS-2p. Flow number and flow time of CIR-emulsion using RAP materials from Story County were higher than those from Clayton County. Flow number and flow time of CIR-emulsion with 0.5% emulsified asphalt were higher than those with 1.0% or 1.5%. Raveling loss of CIR-emulsion with 1.5% emulsified asphalt was significantly less than with 0.5% and 1.0%. Test results in terms of dynamic modulus, flow number, flow time and raveling loss of CIR-foam mixtures are generally better than those of CIR-emulsion mixtures. Given the limited RAP sources used in this study, it is recommended that the CIR-emulsion mix design procedure be validated against several RAP sources and emulsion types.
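The dynamic modulus reported above is, by definition, the ratio of the peak sinusoidal stress amplitude to the peak recoverable strain amplitude, |E*| = σ0/ε0. A minimal sketch of that calculation follows; the numerical reading is hypothetical and not taken from the study's data:

```python
def dynamic_modulus(stress_amp_kpa, strain_amp_microstrain):
    """|E*| = sigma_0 / epsilon_0, returned in MPa.

    stress_amp_kpa:          peak sinusoidal stress amplitude (kPa)
    strain_amp_microstrain:  peak recoverable strain amplitude (microstrain)
    """
    sigma0 = stress_amp_kpa * 1e3           # convert kPa -> Pa
    eps0 = strain_amp_microstrain * 1e-6    # microstrain -> dimensionless
    return sigma0 / eps0 / 1e6              # Pa -> MPa

# Hypothetical reading: 600 kPa stress amplitude, 120 microstrain response,
# which works out to ~5000 MPa.
print(round(dynamic_modulus(600.0, 120.0), 1))
```

Because |E*| is measured across a sweep of temperatures and loading frequencies, higher values at a given condition indicate a stiffer mixture, which is the sense in which the CSS-1h mixtures outperformed the HFMS-2p ones.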