869 results for Just-in-time


Relevance: 100.00%

Abstract:

Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physically based mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties, and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods based on time-series modeling and geostatistics is presented as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied to a case study in an outcrop area of the Guarani Aquifer System (GAS), located in southeastern Brazil. Communication of the results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, such as the GAS.
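The workflow described above can be sketched in a few lines: a deterministic time-series model is fitted to each well's record, and the forecasts are then interpolated in space. Inverse-distance weighting stands in here for the geostatistical (kriging) step to keep the sketch dependency-free, and all well locations and records below are invented for illustration.

```python
import numpy as np

def fit_seasonal_model(t, h, period=12.0):
    """Least-squares fit of level + linear trend + annual harmonic."""
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    beta, *_ = np.linalg.lstsq(X, h, rcond=None)
    return beta

def predict(beta, t, period=12.0):
    return (beta[0] + beta[1] * t
            + beta[2] * np.sin(2 * np.pi * t / period)
            + beta[3] * np.cos(2 * np.pi * t / period))

def idw(xy_wells, values, xy_target, power=2.0):
    """Inverse-distance-weighted spatial interpolation (kriging stand-in)."""
    d = np.linalg.norm(xy_wells - xy_target, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return float(np.sum(w * values) / np.sum(w))

# Three synthetic monitoring wells (x, y in km) with 60 months of data.
t = np.arange(60.0)
wells = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 4.0]])
rng = np.random.default_rng(0)
records = [620 - 0.05 * t + 1.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.2, t.size),
           615 - 0.03 * t + 1.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.2, t.size),
           630 - 0.08 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.2, t.size)]

# Forecast each well 12 months ahead, then map to an unmonitored point.
forecasts = np.array([predict(fit_seasonal_model(t, h), 72.0) for h in records])
level_at_target = idw(wells, forecasts, np.array([3.0, 2.0]))
```

In the framework of the abstract, the kriging step would also carry an uncertainty estimate at the target point, which the simple weighting above does not provide.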


This work provides a step forward in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we brought the reader through the fundamental notions of probability and stochastic processes, stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focused on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We studied LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric methods of estimation. We then introduced the theory of fractional integrals and derivatives, which turns out to be very appropriate for studying and modeling systems with long-memory properties. After introducing the basic concepts, we provided many examples and applications. For instance, we investigated the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems that deviates from the classical exponential Debye pattern. We then focused on the study of generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations were obtained by using fractional integrals and derivatives of distributed orders.
In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduced and studied the generalized grey Brownian motion (ggBm), a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work, we remarked many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t). All these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focused on a subclass made up of processes with stationary increments. The ggBm was defined canonically in the so-called grey noise space; however, we were able to provide a characterization independent of the underlying probability space. We also pointed out that the generalized grey Brownian motion is a direct generalization of a Gaussian process, and in particular that it generalizes both Brownian motion and fractional Brownian motion. Finally, we introduced and analyzed a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We started from the forward drift equation, which was made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation was interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then considered the subordinated process Y(t) = X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation involving the same memory kernel K(t).
We developed several applications and derived the exact solutions. Moreover, we considered different stochastic models for the given equations, providing path simulations.
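As a small illustration of the processes discussed in the first part, the following sketch simulates fractional Gaussian noise exactly via a Cholesky factorization of its covariance; the partial sums give a fractional Brownian motion path, and H > 1/2 yields the long-range dependence discussed above. The function names are ours, not the thesis's.

```python
import numpy as np

def fgn_autocov(k, H):
    """Autocovariance of unit-variance fGn at (possibly negative) lag k."""
    k = np.abs(k)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H)
                  + np.abs(k - 1) ** (2 * H))

def sample_fgn(n, H, rng):
    """Exact fGn sample via Cholesky factorization of the covariance matrix."""
    lags = np.arange(n, dtype=float)
    cov = fgn_autocov(lags[None, :] - lags[:, None], H)
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # jitter for stability
    return L @ rng.standard_normal(n)

rng = np.random.default_rng(1)
noise = sample_fgn(512, H=0.75, rng=rng)   # long-memory increments
fbm = np.cumsum(noise)                     # partial sums give an fBm path
```

For H = 1/2 the autocovariance vanishes at all non-zero lags and the sketch reduces to white noise, i.e. ordinary Brownian motion increments.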


For the detection of climate change, not only the magnitude of a trend signal is of significance. An essential issue is the time period required for the trend to become detectable in the first place. An illustrative measure for this is the time of emergence (ToE), that is, the point in time when a signal finally emerges from the background noise of natural variability. We investigate the ToE of trend signals in different biogeochemical and physical surface variables utilizing a multi-model ensemble comprising simulations of 17 Earth system models (ESMs). We find that signals in ocean biogeochemical variables emerge on much shorter timescales than the physical variable sea surface temperature (SST). The ToE patterns of pCO2 and pH are spatially very similar to that of DIC (dissolved inorganic carbon), yet their trends emerge much faster: after roughly 12 yr for the majority of the global ocean area, compared with between 10 and 30 yr for DIC. ToEs of 45-90 yr are even larger for SST. In general, the background noise is more important in determining the ToE than the strength of the trend signal. In areas of high natural variability, even strong trends in both the physical climate and the carbon cycle system are masked by variability over decadal timescales. In contrast to the trend, natural variability is affected by the seasonal cycle. This has important implications for observations, since it implies that intra-annual variability could call into question the representativeness of irregularly sampled seasonal measurements for the entire year and, thus, the interpretation of observed trends.
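In its simplest linear form, the ToE idea reduces to asking when an accumulating trend exceeds a fixed noise envelope. The numbers below are invented for illustration and are not taken from the ESM ensemble; the 2-standard-deviation threshold is one common, but not unique, choice.

```python
def time_of_emergence(trend_per_year, noise_std, k=2.0):
    """First year in which |trend * t| exceeds k * noise_std.

    trend_per_year: linear trend of the variable (units / yr)
    noise_std:      standard deviation of natural variability (same units)
    """
    return k * noise_std / abs(trend_per_year)

# Hypothetical fast-emerging carbon-cycle variable vs. slowly emerging SST:
toe_ph = time_of_emergence(trend_per_year=-0.0018, noise_std=0.01)  # pH, ~11 yr
toe_sst = time_of_emergence(trend_per_year=0.02, noise_std=0.5)     # SST, ~50 yr
```

The ratio structure makes the abstract's main point explicit: halving the noise shortens the ToE as much as doubling the trend, which is why the background variability dominates the emergence pattern.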


Objective: Processes occurring in the course of psychotherapy are characterized by the simple fact that they unfold in time and that the multiple factors engaged in change processes vary highly between individuals (idiographic phenomena). Previous research, however, has neglected the temporal perspective through its traditional focus on static phenomena, mainly assessed at the group level (nomothetic phenomena). To support a temporal approach, the authors introduce time-series panel analysis (TSPA), a statistical methodology explicitly focusing on the quantification of temporal, session-to-session aspects of change in psychotherapy. TSPA models are initially built at the level of individuals and are subsequently aggregated at the group level, thus allowing the exploration of prototypical models. Method: TSPA is based on vector autoregression (VAR), an extension of univariate autoregression models to multivariate time-series data. The application of TSPA is demonstrated in a sample of 87 outpatient psychotherapy patients who were monitored with postsession questionnaires. Prototypical mechanisms of change were derived from the aggregation of individual multivariate models of the psychotherapy process. In a second step, the associations between mechanisms of change (TSPA) and pre- to post-treatment symptom change were explored. Results: TSPA allowed a prototypical process pattern to be identified, in which patients' alliance and self-efficacy were linked by a temporal feedback loop. Furthermore, therapists' stability over time in both mastery and clarification interventions was positively associated with better outcomes. Conclusions: TSPA is a statistical tool that sheds new light on temporal mechanisms of change. Through this approach, clinicians may gain insight into prototypical patterns of change in psychotherapy.
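The VAR backbone of TSPA can be sketched as a first-order vector autoregression fitted per patient to session-to-session scores, whose coefficient matrices would then be aggregated across patients. The two variables, coefficients and data below are hypothetical stand-ins, not the study's measures.

```python
import numpy as np

def fit_var1(Y):
    """Least-squares VAR(1): Y[t] ~ c + A @ Y[t-1] for a (T, k) series."""
    X = np.column_stack([np.ones(len(Y) - 1), Y[:-1]])
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return B[0], B[1:].T          # intercept vector c, coefficient matrix A

# Simulate 40 sessions for one patient with a known feedback loop between
# two hypothetical scores (alliance, self-efficacy):
rng = np.random.default_rng(2)
A_true = np.array([[0.5, 0.3],    # alliance also driven by past self-efficacy
                   [0.4, 0.4]])   # self-efficacy also driven by past alliance
Y = np.zeros((40, 2))
for t in range(1, 40):
    Y[t] = A_true @ Y[t - 1] + rng.normal(0, 0.1, 2)

c_hat, A_hat = fit_var1(Y)        # off-diagonal terms recover the feedback loop
```

The positive off-diagonal entries of the estimated matrix are what a "temporal feedback loop" between two variables looks like in VAR terms; TSPA's aggregation step would average such matrices over patients.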


Objective. To determine whether the use of a triage team would reduce the average time-in-department in a pediatric emergency department by 25%.

Methods. A triage team consisting of a physician, a nurse, and a nurse's assistant initiated work-ups and saw patients who required minimal laboratory work-up and were likely to be discharged. Study days were randomized. The inclusion criteria were all children seen in the emergency center between 6 p.m. and 2 a.m., Monday through Friday. The exclusion criteria were resuscitations, inpatient-to-inpatient transfers, patients who left without being seen or against medical advice, and any child seen outside the 6 p.m.-2 a.m. Monday-Friday window, including weekends. A Pearson chi-square test was used to compare the two groups for heterogeneity. For the time-in-department analysis, we performed two-sided Mann-Whitney U tests with alpha set at 0.05, looking for differences in time-in-department by acuity level, disposition, and acuity level stratified by disposition.

Results. Among urgent and non-urgent patients, we found a statistically significant decrease in time-in-department. Urgent patients seen on triage-team days had a time-in-department 51 minutes shorter than that of patients seen on non-triage-team days (p = 0.007), a 14% decrease. Non-urgent patients seen on triage-team days had a time-in-department 24 minutes shorter than that of non-urgent patients seen on non-triage-team days (p = 0.009). From the disposition perspective, discharged patients seen on triage-team days had a time-in-department 28 minutes shorter than that of those seen on non-triage-team days (p = 0.012).

Conclusion. Overall, there was a trend toward a decreased time-in-department of 19 minutes (a 5.9% decrease) during triage-team times. There was a statistically significant decrease in time-in-department among urgent patients of 51 minutes (a 13.9% decrease) and among discharged patients of 28 minutes (an 8.4% decrease). Urgent-care patients make up nearly a quarter of the emergency patient population, and decreasing their time-in-department would likely make a significant impact on overall emergency flow.
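The study's core comparison, a Mann-Whitney U test on time-in-department, can be sketched as follows. The hand-rolled implementation uses the normal approximation without tie correction, and the minute values are invented for illustration, not the study's data.

```python
from math import erf, sqrt
import numpy as np

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation.

    No tie correction: adequate for continuous data such as minutes.
    """
    combined = np.concatenate([a, b])
    ranks = combined.argsort().argsort() + 1.0   # rank 1 = smallest value
    u = ranks[:len(a)].sum() - len(a) * (len(a) + 1) / 2
    mu = len(a) * len(b) / 2
    sigma = sqrt(len(a) * len(b) * (len(a) + len(b) + 1) / 12)
    z = (u - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return u, p

# Hypothetical samples mirroring the reported 51-minute difference:
rng = np.random.default_rng(3)
triage = rng.normal(180, 40, 60)    # minutes, triage-team days
control = rng.normal(231, 40, 60)   # minutes, non-triage-team days
u, p = mann_whitney_u(triage, control)
```

A rank-based test is the natural choice here because time-in-department distributions are typically right-skewed, so a t-test's normality assumption would be doubtful.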


Type 2 diabetes has grown to epidemic proportions in the U.S., and its prevalence has been steadily increasing in Texas. Physical activity levels in the population have remained low despite physical activity being one of the primary preventive strategies for type 2 diabetes. The objectives of this study were to estimate the direct medical costs of type 2 diabetes attributable to not meeting the physical activity Guidelines and to physical inactivity in the U.S. and Texas in 2007. This was a cross-sectional study that used physical activity prevalence data from the 2007 Behavioral Risk Factor Surveillance System (BRFSS) to estimate the population attributable risk percentage (PAR%) for type 2 diabetes. These data were combined with prevalence and cost data for type 2 diabetes to estimate the cost of type 2 diabetes attributable to not meeting the Guidelines and to inactivity in the U.S. and Texas in 2007.

The cost of type 2 diabetes in the U.S. in 2007 attributable to not meeting the physical activity Guidelines was estimated at $13.29 billion, and that attributable to physical inactivity (no leisure-time physical activity) at $3.32 billion. Depending on various assumptions, these estimates ranged from $7.61 billion to $41.48 billion for not meeting the Guidelines, and from $1.90 billion to $13.20 billion for physical inactivity. The corresponding costs for Texas in 2007 were estimated at $1.15 billion for not meeting the Guidelines and $325 million for physical inactivity, with ranges of $800 million to $3.47 billion and $186 million to $1.28 billion, respectively. These results illustrate how much money could be saved annually, in terms of type 2 diabetes costs alone, in the U.S. and Texas if the entire adult population were active enough to meet the physical activity Guidelines. Physical activity promotion, particularly at the environmental and policy level, should be a priority.
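The attributable-cost arithmetic behind such estimates follows the standard population attributable risk formula, PAR% = p(RR - 1) / (1 + p(RR - 1)), where p is the prevalence of the exposure and RR the relative risk. The prevalence, relative risk and total-cost inputs below are placeholders, not the study's actual figures.

```python
def attributable_cost(prevalence, relative_risk, total_cost):
    """Cost attributable to an exposure via the PAR% formula.

    prevalence:    fraction of the population exposed (e.g. not meeting
                   the physical activity Guidelines)
    relative_risk: risk of type 2 diabetes in exposed vs. unexposed
    total_cost:    total direct medical cost of the disease
    """
    excess = prevalence * (relative_risk - 1)
    par = excess / (1 + excess)
    return par * total_cost

# Hypothetical example: 50% exposed, RR of 1.3, $100 billion total cost:
cost = attributable_cost(0.50, 1.3, 100e9)   # about $13 billion attributable
```

The wide ranges quoted in the abstract arise naturally from this formula: the attributable cost scales directly with the assumed relative risk and total cost, so varying either assumption shifts the estimate proportionally.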


The project arises from the need to develop improved teaching methodologies in the field of continuum mechanics. The objective is to offer students a learning process through which to acquire the necessary theoretical knowledge, cognitive skills, and the responsibility and autonomy needed for professional development in this area. Traditionally, the concepts of these subjects were taught through lectures and laboratory practice. During these lessons the students' attitude was usually passive, and their effectiveness was therefore poor. The proposed methodology, which has already been employed successfully at universities such as the University of Bochum, Germany, and the University of South Australia, aims to improve the effectiveness of knowledge acquisition through the student's use of a virtual laboratory. This laboratory makes it possible to adapt the curricula and learning techniques to the European Higher Education Area and to improve current learning processes at the University School of Public Works Engineers (EUITOP) of the Technical University of Madrid (UPM), since there are no laboratories for this specialization. The virtual space is created using a software platform built on OpenSim, which manages 3D virtual worlds, and the Linden Scripting Language (LSL), which gives objects specific behaviors. Students can access this virtual world through their avatar, their character in the virtual world, and can carry out practical work within the space created for this purpose at any time, needing only a computer with internet access and a viewer. The virtual laboratory has three areas. In the virtual meeting rooms, the avatar can interact with peers, solve problems and exchange the documentation held in the virtual library. In the interactive game room, the avatar has to resolve a number of problems against the clock. And in the video room, students can watch instructional videos and receive group lessons.
Each audiovisual interactive element is accompanied by explanations framing it within the area of knowledge, enabling students to begin to acquire the vocabulary and practice of the profession for which they are being trained. Plane elasticity concepts are introduced through tension and compression testing of steel and concrete specimens. The behavior of reticulated and articulated structures is reinforced by interactive games, and the concepts of tension, compression, and local and global buckling are illustrated by tests that load articulated structures to failure. Pure bending and simple and combined torsion are studied by observing a flexible specimen. Earthquake-resistant building design is illustrated with a video of a laboratory test.


Equations for extreme runup derived from several experimental studies are compared. Infragravity oscillations dominate the swash in a dissipative beach state but not in intermediate-reflective states. Therefore, two kinds of equation should be used, depending on either the significant wave height, H0, or the Iribarren number, ξ0. Using a physical model with a uniform sand-bed slope, equations are proposed for both beach states, and the results are compared with previous field and physical-model experiments. Once the equations are chosen, the temporal and longshore variability of the foreshore slope on a medium- to long-term time scale is evaluated in two extreme cases on the Spanish coast. Salinas beach on the north coast (Bay of Biscay) displays a permanently dissipative beach state with small variations in the foreshore slope both along the shore and in time, so foreshore slope deviations over a medium- to long-term period are irrelevant and extreme runup is predicted from the wave height associated with the design return period. Peñíscola beach on the east coast (Mediterranean Sea) displays an intermediate state. If only variations in time are analysed, their effect on the determination of extreme runup is irrelevant. In contrast, significant differences are found when the longshore variations of this Mediterranean beach are considered.
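The beach-state switch described above can be sketched as follows. The Iribarren number is the standard ξ0 = tan β / √(H0/L0), with deep-water wavelength L0 = gT²/2π; the 0.3 threshold and both runup coefficients below are illustrative placeholders, not the equations fitted in the study.

```python
from math import pi, sqrt

G = 9.81  # gravitational acceleration, m/s^2

def iribarren(tan_beta, H0, T):
    """Deep-water Iribarren number for foreshore slope tan_beta,
    significant wave height H0 (m) and peak period T (s)."""
    L0 = G * T ** 2 / (2 * pi)          # deep-water wavelength
    return tan_beta / sqrt(H0 / L0)

def extreme_runup(tan_beta, H0, T, xi_crit=0.3):
    """Hypothetical two-regime runup estimate (m) and beach state."""
    xi0 = iribarren(tan_beta, H0, T)
    if xi0 < xi_crit:                   # dissipative: runup scales with H0 only
        return 0.18 * H0, "dissipative"             # placeholder coefficient
    return 0.83 * xi0 * H0, "intermediate-reflective"  # placeholder coefficient

# Gentle Atlantic-type slope vs. steeper Mediterranean-type slope:
r_salinas = extreme_runup(0.02, 4.0, 12.0)   # xi0 ~ 0.15 -> dissipative
r_peniscola = extreme_runup(0.08, 2.0, 8.0)  # xi0 ~ 0.57 -> intermediate
```

The two-branch structure is the point: on the dissipative branch the foreshore slope cancels out of the prediction, which is why slope variability is irrelevant at Salinas but matters at Peñíscola.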


The aim of this thesis is to study the dynamics of the logarithmic layer of wall-bounded turbulent flows. Specifically, we propose a new structural model based on different types of coherent structures: sweeps, ejections, vortex clusters and streaks. The tool used is the direct numerical simulation of turbulent channels. Since the early work of Theodorsen (1952), coherent structures have played a fundamental role in understanding the organization and dynamics of turbulent flows. Today, data from direct numerical simulations stored at non-contiguous instants allow the fundamental properties of three-dimensional coherent structures to be studied statistically. The dynamics, however, cannot be understood in detail from isolated snapshots alone; the structures must be tracked continuously in time. Although some studies exist on the temporal evolution of the smallest structures at moderate Reynolds numbers, e.g. Robinson (1991), a complete study at high Reynolds numbers covering all the scales present in the logarithmic layer has not yet been carried out, and is the objective of this thesis. The most interesting problems are found in the logarithmic region, where the cascades of vorticity, energy and momentum reside. Several models attempt to explain the organization of turbulent flows in this region. One of the most widespread was proposed by Adrian et al. (2000) from experimental observations; its fundamental element is packets of hairpin vortices that act cooperatively to generate low-momentum ramps. An alternative model was devised by del Álamo & Jiménez (2006) using numerical data. Also based on vortex clusters, it proposed a much more disorganized scenario, with structures that are not hairpin-shaped.
Although the two models are kinematically similar, they are not dynamically so, particularly as regards the role the wall plays in the creation and life of the structures. Another important open question concerns the turbulent cascade model proposed by Kolmogorov (1941b) and its relation to coherent structures measurable in the flow. To answer these questions, we have developed a new method that tracks coherent structures in time, and applied it to numerical simulations of turbulent channels with Reynolds numbers high enough to have a non-trivial range of scales, and with computational domains large enough to correctly represent the dynamics of the logarithmic layer. Our efforts have proceeded in four steps. First, we carried out a campaign of direct numerical simulations at different Reynolds numbers and box sizes to assess the effect of the computational domain on the one-point statistics and spectra. From the results we concluded that simulations with boxes of length 2π and width π times the channel half-height are large enough to reproduce correctly the interactions between coherent structures of the logarithmic layer and the remaining scales. These simulations are the starting point for the subsequent analyses. Second, the coherent structures corresponding to regions of intense tangential Reynolds stress (Qs) in a turbulent channel were studied by extending quadrant analysis to three dimensions, with special emphasis on the logarithmic layer and the outer region. The coherent structures were identified as contiguous regions of space where the tangential Reynolds stress is more intense than a given threshold.
The results show that Qs detached from the wall are isotropically oriented and their net contribution to the mean Reynolds stress is zero. The largest contribution comes from a family of larger, self-similar structures whose lower part lies very close to the wall (wall-attached), with complex geometry and fractal dimension ≈ 2. These structures have a shape similar to a 'sponge of flakes', while the vortex clusters resemble a 'sponge of strings'. Although the number of objects decays away from the wall, the fraction of the Reynolds stress that they contain is independent of their height, and a large part resides in a few structures that extend beyond the channel center, as in the large-scale structures proposed by other authors. The dominant structures in the logarithmic layer are side-by-side pairs of sweeps and ejections with associated vortex clusters, sharing the dimensions and stresses of the wall-attached eddies proposed by Townsend. Third, we studied the temporal evolution of Qs and vortex clusters using the direct numerical simulations presented above, up to friction Reynolds numbers Reτ = 4200. The structures were identified following the procedure described in the previous paragraph and then tracked in time. Through the geometric intersection of structures belonging to contiguous instants, we built graphs of temporal connections among all the objects and, from them, defined primary and secondary branches, so that each branch represents the temporal evolution of one coherent structure. Once the evolutions are properly organized, they provide all the information needed to characterize the history of the structures from birth to death.
The results show that structures are born at all distances from the wall, but with higher probability near it, where the shear is most intense. Most remain small and do not live long; however, there is a family of structures that grow enough to attach to the wall and extend across the logarithmic layer, becoming the structures observed previously and described by Townsend. These structures are geometrically self-similar, with lifetimes proportional to their size. Most reach sizes above the Corrsin scale, so their dynamics are controlled by the mean shear. The results also show that ejections move away from the wall with mean velocity uτ (the friction velocity) and that their base attaches to the wall very early in their lives. Sweeps, by contrast, move toward the wall with velocity −uτ and attach later. In both cases, the objects remain attached to the wall for two thirds of their lives. In the streamwise direction, the structures travel at velocities close to the mean flow convection and are deformed by the shear. Finally, we have interpreted the turbulent cascade not only as a conceptual way of organizing the flow, but as a physical process in which coherent structures merge and split. The volume of a structure changes smoothly when it neither merges nor splits, and abruptly otherwise. The merging and splitting processes can be understood as a direct (splits) or inverse (mergers) cascade, following the concept of the eddy cascade devised by Richardson (1920) and Obukhov (1941). The analysis of the data shows that structures smaller than 30η (Kolmogorov units) never merge or split, i.e., they do not undergo the cascade process.
On the contrary, those larger than 100η always split or merge at least once in their lives. In these cases, the total volume gained and lost is a substantial fraction of the mean volume of the structure involved, with a slightly stronger tendency to split (direct cascade) than to merge (inverse cascade). Most interactions between branches are due to splits or mergers of very small fragments at the Kolmogorov scale with larger structures, although the effect of larger fragments is not negligible. We also found that splits tend to occur at the end of a structure's life and mergers at the beginning. Although the results for the direct and inverse cascades are not identical, they are very symmetric, which suggests a high degree of reversibility in the cascade process. ABSTRACT The purpose of the present thesis is to study the dynamics of the logarithmic layer of wall-bounded turbulent flows. Specifically, to propose a new structural model based on four different coherent structures: sweeps, ejections, clusters of vortices and velocity streaks. The tool used is the direct numerical simulation of time-resolved turbulent channels. Since the first work by Theodorsen (1952), coherent structures have played an important role in the understanding of turbulence organization and its dynamics. Nowadays, data from individual snapshots of direct numerical simulations allow the study of the three-dimensional statistical properties of those objects, but their dynamics can only be fully understood by tracking them in time. Although the temporal evolution has already been studied for small structures at moderate Reynolds numbers, e.g., Robinson (1991), a temporal analysis of three-dimensional structures spanning from the smallest to the largest scales across the logarithmic layer has yet to be performed, and is the goal of the present thesis.
The most interesting problems lie in the logarithmic region, which is the seat of cascades of vorticity, energy, and momentum. Different models involving coherent structures have been proposed to represent the organization of wall-bounded turbulent flows in the logarithmic layer. One of the most extended ones was conceived by Adrian et al. (2000) and built on packets of hairpins that grow from the wall and work cooperatively to generate low-momentum ramps. A different view was presented by del Álamo & Jiménez (2006), who extracted coherent vortical structures from DNSs and proposed a less organized scenario. Although the two models are kinematically fairly similar, they have important dynamical differences, mostly regarding the relevance of the wall. Another open question is whether such a model can be used to explain the cascade process proposed by Kolmogorov (1941b) in terms of coherent structures. The challenge would be to identify coherent structures undergoing a turbulent cascade that can be quantified. To gain a better insight into the previous questions, we have developed a novel method to track coherent structures in time, and used it to characterize the temporal evolutions of eddies in turbulent channels with Reynolds numbers high enough to include a non-trivial range of length scales, and computational domains sufficiently long and wide to reproduce correctly the dynamics of the logarithmic layer. Our efforts have followed four steps. First, we have conducted a campaign of direct numerical simulations of turbulent channels at different Reynolds numbers and box sizes, and assessed the effect of the computational domain on the one-point statistics and spectra.
From the results, we have concluded that computational domains with streamwise and spanwise sizes of 2π and π times the half-height of the channel, respectively, are large enough to accurately capture the dynamical interactions between structures in the logarithmic layer and the rest of the scales. These simulations are used in the subsequent chapters. Second, the three-dimensional structures of intense tangential Reynolds stress in plane turbulent channels (Qs) have been studied by extending the classical quadrant analysis to three dimensions, with emphasis on the logarithmic and outer layers. The eddies are identified as connected regions of intense tangential Reynolds stress. Qs are then classified according to their streamwise and wall-normal fluctuating velocities as inward interactions, outward interactions, sweeps and ejections. It is found that wall-detached Qs are isotropically oriented background stress fluctuations, common to most turbulent flows, and do not contribute to the mean stress. Most of the stress is carried by a self-similar family of larger wall-attached Qs, increasingly complex away from the wall, with fractal dimensions ≈ 2. They have shapes similar to 'sponges of flakes', while vortex clusters resemble 'sponges of strings'. Although their number decays away from the wall, the fraction of the stress that they carry is independent of their heights, and a substantial part resides in a few objects extending beyond the centerline, reminiscent of the very large scale motions of several authors. The predominant logarithmic-layer structures are side-by-side pairs of sweeps and ejections, with an associated vortex cluster, and dimensions and stresses similar to Townsend's conjectured wall-attached eddies. Third, the temporal evolution of Qs and vortex clusters is studied using time-resolved DNS data up to Reτ = 4200 (friction Reynolds number). The eddies are identified following the procedure presented above, and then tracked in time.
From the geometric intersection of structures in consecutive fields, we have built temporal connection graphs of all the objects, and defined main and secondary branches in such a way that each branch represents the temporal evolution of one coherent structure. Once these evolutions are properly organized, they provide the necessary information to characterize eddies from birth to death. The results show that the eddies are born at all distances from the wall, although with higher probability near it, where the shear is strongest. Most of them stay small and do not last for long times. However, there is a family of eddies that become large enough to attach to the wall while they reach into the logarithmic layer, and become the wall-attached structures previously observed in instantaneous flow fields. They are geometrically self-similar, with sizes and lifetimes proportional to their distance from the wall. Most of them achieve lengths well above the Corrsin scale, and hence their dynamics are controlled by the mean shear. Eddies associated with ejections move away from the wall with an average velocity uτ (the friction velocity), and their base attaches very fast at the beginning of their lives. Conversely, sweeps move towards the wall at −uτ, and attach later. In both cases, they remain attached for 2/3 of their lives. In the streamwise direction, eddies are advected and deformed by the local mean velocity. Finally, we interpret the turbulent cascade not only as a way to conceptualize the flow, but as an actual physical process in which coherent structures merge and split. The volume of an eddy can change either smoothly, when it is not merging or splitting, or through sudden changes. The processes of merging and splitting can be thought of as a direct (when splitting) or an inverse (when merging) cascade, following the ideas envisioned by Richardson (1920) and Obukhov (1941).
It is observed that there is a minimum length of 30η (Kolmogorov units) above which mergers and splits begin to be important. Moreover, all eddies above 100η split and merge at least once in their lives. In those cases, the total volume gained and lost is a substantial fraction of the average volume of the structure involved, with slightly more splits (direct cascade) than mergers. Most branch interactions are found to be the shedding or absorption of Kolmogorov-scale fragments by larger structures, but more balanced splits or mergers spanning a wide range of scales are also found to be important. The results show that splits are more probable at the end of the life of the eddy, while mergers take place at the beginning of the life. Although the results for the direct and the inverse cascades are not identical, they are found to be very symmetric, which suggests a high degree of reversibility of the cascade process.

Abstract:

It has been widely documented that when Building Information Modelling (BIM) is used, there is a shift of effort to the design phase. Little investigation has been done into the impact of this shift and how it affects costs, and it can be difficult to justify the increased expenditure on BIM in a market that is heavily driven by costs. Current studies attempt to quantify the return on investment (ROI) for BIM, with the returns seen as balancing out the shift of effort and cost into the design phase. These studies, however, quantify the ROI based on the individual stakeholder’s investment, without considering the impact that their project partners’ use of BIM may have on their own profitability. In this study, a questionnaire investigated the opinions and experience of construction professionals, representing clients, consultants, designers and contractors, to determine fluctuations in costs by their magnitude and when they occur. These factors were examined more closely by interviewing senior members representing each of the stakeholder categories and comparing their experience of using BIM in environments where their project partners were also using BIM and where they were not. This determined how the use of, and investment in, BIM affects others and how costs are redistributed, not just through time but also between stakeholders and categories of cost. Some of these cost fluctuations, and how the cost of BIM is currently financed, are also highlighted in several case studies. The results show that the current distribution of costs, set for traditional 2D delivery, is hindering the potential success of BIM. There is also evidence that stakeholders who do not use BIM may benefit financially from the BIM use of others, and that collaborative BIM differs significantly from ‘lonely’ BIM in terms of benefits and profitability.

Abstract:

Objective: To quantify time spent caring, burden and health status in carers of stroke patients after discharge from rehabilitation, and to identify potentially modifiable sociodemographic and clinical characteristics associated with these outcomes. Methods: Patients and carers were prospectively interviewed 6 (n = 71) and 12 (n = 57) months after discharge. Relationships of carer and patient variables with burden, health status and caring time were analysed by Gaussian and Poisson regression. Results: Carers showed considerable burden at 6 and 12 months. Carers spent 4.6 and 3.6 hours per day assisting patients with daily activities at 6 and 12 months, respectively. Improved patient motor and cognitive function was associated with reductions of up to 20 minutes per day in time spent on daily activities. Better patient mental health and cognitive function were associated with better carer mental health. Conclusions: Potentially modifiable factors such as these may be targeted by caregiver training, support and education programmes and by outpatient therapy for patients.
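The Poisson-regression step named in the Methods can be illustrated with a minimal fit of log E[y] = Xβ by iteratively reweighted least squares (Fisher scoring with the canonical log link). This is a generic sketch of the technique, not the study’s analysis; the design matrix and data below are purely illustrative.

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Fit a Poisson regression log E[y] = X @ beta by IRLS.

    X : (n, p) design matrix (include a column of ones for the intercept).
    y : (n,) non-negative outcomes (e.g. counts, or minutes of caring time).
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        # working response and weights of the canonical log link
        z = eta + (y - mu) / mu
        W = mu
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta
```

Each fitted coefficient is then interpreted multiplicatively: exp(β) is the factor by which the expected outcome changes per unit of the covariate.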

Abstract:

A range of physical and engineering systems exhibit an irregular complex dynamics featuring an alternation of quiet and burst time intervals, called intermittency. The type of intermittency most popular in laser science is on-off intermittency [1]. On-off intermittency can be understood as the conversion of noise, in a system close to an instability threshold, into effective time-dependent fluctuations which result in an alternation of stable and unstable periods. On-off intermittency has recently been demonstrated in semiconductor, erbium-doped and Raman lasers [2-5]. The recently demonstrated random distributed feedback (random DFB) fiber laser exhibits irregular dynamics near the generation threshold [6,7]. Here we show intermittency in the cascaded random DFB fiber laser. We study intensity fluctuations in a random DFB fiber laser based on nitrogen-doped fiber. Under appropriate pumping, the laser generates first and second Stokes components at 1120 nm and 1180 nm, respectively. We study the intermittency in the radiation of the second Stokes wave. A typical time trace near the generation threshold of the second Stokes wave (Pth) is shown in Fig. 1a. From a number of sufficiently long time traces we calculate the statistical distribution of times between major spikes in the dynamics, Fig. 1b. To eliminate the contribution of high-frequency components of the spikes, we use a low-pass filter along with a reference value of the output power. The experimental data are fitted by a power law, τ ~ (P - Pth)^γ, where τ is the mean time between spikes. There are two different intermittency regimes. Just above Pth, the mean time follows a -3/2 power law, which is typical of on-off intermittency with hopping between two states (the first and second Stokes waves in our case) [7]. At higher power, the mean time follows a -4 power law, indicating a change of intermittency type to multistate. Multistable dynamics has been observed in erbium-doped fiber lasers [8].
The origin of the multiple states in our system is probably connected with polarization hopping or other mechanisms, and should be investigated further. We have presented a first experimental statistical characterisation of the on-off and multistate intermittencies that occur in the generation of the second Stokes wave in a nitrogen-doped random DFB fiber laser. References: [1] H. Fujisaka and T. Yamada, “A New Intermittency in Coupled Dynamical Systems,” Prog. Theor. Phys. 74, 918 (1985). [2] S. Osborne, A. Amann, D. Bitauld, and S. O’Brien, “On-off intermittency in an optically injected semiconductor laser,” Phys. Rev. E 85, 056204 (2012). [3] S. Sergeyev, K. O'Mahoney, S. Popov, and A. T. Friberg, “Coherence and anticoherence resonance in high-concentration erbium-doped fiber laser,” Opt. Lett. 35, 3736 (2010). [4] A.E. El-Taher, S.V. Sergeyev, E.G. Turitsyna, P. Harper, and S. K. Turitsyn, “Intermittent Self-Pulsing in a Fiber Raman Laser,” in Proc. Conf. Nonlinear Photonics, paper ID 1367139, Colorado Springs, USA, 2012. [5] S.K. Turitsyn, S.A. Babin, A.E. El-Taher, P. Harper, D.V. Churkin, S.I. Kablukov, J.D. Ania-Castañón, V. Karalekas, and E.V. Podivilov, “Random distributed feedback fibre laser,” Nat. Photon. 4, 231 (2010). [6] I. D. Vatnik, D. V. Churkin, S. A. Babin, and S. K. Turitsyn, “Cascaded random distributed feedback Raman fiber laser operating at 1.2 μm,” Opt. Express 19, 18486 (2011). [7] W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd ed. (Wiley, New York, 1968). [8] G. Huerta-Cuellar, A.N. Pisarchik, and Y.O. Barmenkov, “Experimental characterization of hopping dynamics in a multistable fiber laser,” Phys. Rev. E 78, 035202(R) (2008).
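The spike statistics described above can be reproduced schematically: detect upward threshold crossings in a (low-pass filtered) intensity trace, compute the mean inter-spike time τ, and fit the exponent γ of τ ~ (P - Pth)^γ by linear regression in log-log coordinates. The functions below are an illustrative sketch under those assumptions, not the analysis code used in the experiment.

```python
import numpy as np

def mean_interspike_time(trace, dt, threshold):
    """Mean time between spike onsets (upward threshold crossings)."""
    trace = np.asarray(trace, dtype=float)
    above = trace > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    if len(onsets) < 2:
        return np.nan  # not enough spikes to define an interval
    return np.mean(np.diff(onsets)) * dt

def fit_power_law_exponent(P, tau, Pth):
    """Fit tau ~ (P - Pth)**gamma by least squares in log-log coordinates."""
    x = np.log(np.asarray(P, dtype=float) - Pth)
    y = np.log(np.asarray(tau, dtype=float))
    gamma, _ = np.polyfit(x, y, 1)
    return gamma
```

Applied to data just above threshold, such a fit would return γ ≈ -3/2 in the on-off regime and γ ≈ -4 in the multistate regime described in the text.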

Abstract:

MSC 2010: 34A08 (main), 34G20, 80A25

Abstract:

Climate change will exacerbate the challenges facing food security in the UK. The increasing frequency and intensity of extreme weather events will further impact farm systems, and at the heart of the impending challenges to UK agricultural production, farmers’ resilience will be tested to new limits. Research into farmers’ resilience to climate change in the UK is distinctly underdeveloped when compared to research in developing and other developed nations. This research gap is addressed through an exploration of farmers’ resilience in the Welsh Marches, establishing the role of risk perceptions, local knowledge and adaptive capacity in farmers’ decision-making to limit climate shocks. Further contributions to agricultural geography are made through experimentation with a ‘cultural-behavioural approach’, seeking to revisit the behavioural approach in view of the cultural turn. The Welsh Marches, situated on the English-Welsh border, was selected as a focal point due to its agricultural diversity and known experiences of extreme weather events. A phased mixed-methods approach is adopted. Phase one explores recorded and reported experiences of past extreme weather events in local meteorological records and local newspaper articles. Phase two consists of 115 survey-questionnaires, 15 in-depth semi-structured interviews, and a scenario-based focus group with selected farmers from the Welsh Marches. This allows farmers’ resilience to climate change in the past, present and future to be explored. Original contributions to knowledge are made by demonstrating the value of focusing upon the culture of a specific farm community, applying a ‘bottom-up’ approach. The priority given to the weather in farmers’ decision-making is found to be determined by the individual relationships that farmers develop with the weather.
Yet farmers’ observations converge on a recognition of considerable changes in the weather over the last 30 years, acknowledging more extremes and greater seasonal variation. In contrast, perceptions of future climate change vary widely. Farmers are found to be disengaged from the communication of climate change science, as the global impacts portrayed are distant in time and place from the probable impacts that may be experienced locally; the current communication of climate change information alienates farmers from the local reality of probable future impacts. Adaptation options and responses to extreme weather and climate change are identified from measures already implemented and those considered for the future. A greater need to explore local knowledge and risk perception in relation to farmers’ understanding of future climate challenges is clear, and there is a need to conduct comparable research in different farm communities across the UK. Progress in establishing the role of farmers’ resilience in responding effectively to future climate challenges has only just begun.

Abstract:

One challenge in data assimilation (DA) methods is how to compute the error covariance of the model state. Ensemble methods have been proposed for producing error covariance estimates, in which the error is propagated in time by the non-linear model itself. Variational methods, on the other hand, use concepts from control theory, whereby the state estimate is optimized from both the background and the measurements; numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the 30 171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation, and we found that the results revealed by the VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we have presented a non-intrusive approach to coupling the model and a DA scheme: an external program is used to send and receive information between the model and the DA procedure using files.
The advantage of this method is that the changes needed in the model code are minimal, only a few lines which facilitate input and output. Apart from the simplicity of the coupling, the approach can be employed even if the two codes are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized; nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in lake Säkylän Pyhäjärvi. The lake has an area of 154 km2 and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009, and the effect of the organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, the match was poor. The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem, and with DA this would help, for instance, in better understanding environmental hazard variables. We have found that using a very high ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF, together with the ensemble-size limit on performance, points towards the emerging area of Reduced Order Modeling (ROM). To save computational resources, ROM avoids running the full-blown model. Applying ROM with the non-intrusive DA approach might yield a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
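VEnKF resamples the ensemble whenever measurements arrive; the ensemble-Kalman analysis step that such methods build on can be sketched as a stochastic EnKF update with perturbed observations and a linear observation operator. This is a generic textbook sketch, not the thesis implementation, and the variable names are illustrative.

```python
import numpy as np

def enkf_analysis(ensemble, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble : (n_state, n_members) forecast ensemble
    y        : (n_obs,) observation vector
    H        : (n_obs, n_state) linear observation operator
    R        : (n_obs, n_obs) observation-error covariance
    Returns the analysis (updated) ensemble.
    """
    n_state, m = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    Pf = X @ X.T / (m - 1)                             # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
    # perturb the observations so the analysis ensemble keeps the right spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return ensemble + K @ (Y - H @ ensemble)
```

The sample covariance Pf is exactly where inbreeding can occur when the spread is too small; resampling the ensemble at every measurement time, as VEnKF does, is one way to counter it.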