Abstract:
Frequency-transformed EEG resting data have been widely used to describe normal and abnormal brain functional states as a function of the spectral power in different frequency bands. This has yielded a series of clinically relevant findings. However, by transforming the EEG into the frequency domain, the initially excellent time resolution of time-domain EEG is lost. Topographic time-frequency decomposition is a novel computerized EEG analysis method that combines previously available techniques from time-domain spatial EEG analysis and time-frequency decomposition of single-channel time series. It yields a new, physiologically and statistically plausible topographic time-frequency representation of human multichannel EEG. The original EEG is accounted for by the coefficients of a large set of user-defined, EEG-like time series, which are optimized for maximal spatial smoothness and minimal norm. These coefficients are then reduced to a small number of model scalp field configurations, which vary in intensity as a function of time and frequency. The result is thus a small number of EEG field configurations, each with a corresponding time-frequency (Wigner) plot. The method has several advantages: it does not assume that the data are composed of orthogonal elements, it does not assume stationarity, it produces topographic maps, and it allows the inclusion of user-defined, specific EEG elements such as spike-and-wave patterns. After a formal introduction of the method, several examples are given, including artificial data and multichannel EEG recorded during different physiological and pathological conditions.
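The pipeline below is only a rough sketch of these ingredients, not the authors' algorithm: a discrete Wigner-Ville transform is computed per channel, and a plain rank-3 SVD stands in for the smoothness- and minimal-norm-constrained reduction to model field configurations (all names and the synthetic data are illustrative):

```python
import numpy as np

def wigner_ville(x):
    """Discrete (pseudo) Wigner-Ville distribution of a 1-D signal.
    Returns a (time x frequency) array; real EEG work would use an
    analytic signal and windowing to suppress cross-terms."""
    n = len(x)
    acf = np.zeros((n, n), dtype=complex)
    for t in range(n):
        taumax = min(t, n - 1 - t)
        tau = np.arange(-taumax, taumax + 1)
        acf[t, tau % n] = x[t + tau] * np.conj(x[t - tau])
    return np.real(np.fft.fft(acf, axis=1))

rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 256))             # 19 channels, 256 samples
tf = np.stack([wigner_ville(ch) for ch in eeg])  # (19, 256, 256)
u, s, vt = np.linalg.svd(tf.reshape(19, -1), full_matrices=False)
maps = u[:, :3]                                  # 3 model scalp-field configurations
plots = (s[:3, None] * vt[:3]).reshape(3, 256, 256)  # their time-frequency plots
```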
Abstract:
In the Mt. Olympos region of northeastern Greece, continental margin strata and basement rocks were subducted and metamorphosed under blueschist facies conditions, and thrust over carbonate platform strata during Alpine orogenesis. Subsequent exposure of the subducted basement rocks by normal faulting has allowed an integrated study of the timing of metamorphism, its relationship to deformation, and the thermal history of the subducted terrane. Alpine low-grade metamorphic assemblages occur at four structural levels. Three thrust sheets composed of Paleozoic granitic basement and Mesozoic metasedimentary cover were thrust over Mesozoic carbonate rocks and Eocene flysch; thrusting and metamorphism occurred first in the highest thrust sheets and progressed downward as units were imbricated from NE to SW. 40Ar/39Ar spectra from hornblende, white mica, and biotite samples indicate that the upper two units preserve evidence of four distinct thermal events: (1) 293–302 Ma crystallization of granites, with cooling from >550°C to <325°C by 284 Ma; (2) 98–100 Ma greenschist to blueschist-greenschist transition facies metamorphism (T∼350–500°C) and imbrication of continental thrust sheets; (3) 53–61 Ma blueschist facies metamorphism and deformation of the basement and continental margin units at T<350–400°C; (4) 36–40 Ma thrusting of blueschists over the carbonate platform, and metamorphism at T∼200–350°C. Only the Eocene and younger events affected the lower two structural packages. A fifth event, indicated by diffusive loss profiles in microcline spectra, reflects the beginning of uplift and cooling to T<100–150°C at 16–23 Ma, associated with normal faulting which continued until Quaternary time. Incomplete resetting of mica ages in all units constrains the temperature of metamorphism during continental subduction to T≤350°C, the closure temperature for Ar in muscovite. The diffusive loss profiles in micas and K-feldspars enable us to “see through” the younger events to older events in the high-T parts of the release spectra. Micas grown during earlier metamorphic events lost relatively small amounts of Ar during subsequent high pressure-low temperature metamorphism. Release spectra from phengites grown during Eocene metamorphism and deformation record the ages of the Ar-loss events. Alpine deformation in northern Greece occurred over a long time span (∼90 Ma), and involved subduction and episodic imbrication of continental basement before, during, and after the collision of the Apulian and Eurasian plates. Syn-subduction uplift and cooling probably combined with intermittently higher cooling rates during extensional events to preserve the blueschist facies mineral assemblages as they were exhumed from depths of >20 km. Extension in the Olympos region was synchronous with extension in the Mesohellenic trough and the Aegean back-arc, and concurrent with westward-progressing shortening in the external Hellenides.
Abstract:
On the Limits of Greenwich Mean Time, or The Failure of a Modernist Revolution. From the introduction of World Standard Time in 1884 to Einstein’s theory of relativity, the nature and regulation of time was a highly contested issue in modernism, with profound political, social and epistemological consequences. Modernist aesthetic sensibilities widely revolted against the increasingly strict rule of the clock, which, as Georg Simmel observed in “The Metropolis and Mental Life,” was established as the necessary basis of capitalist, urban life. This paper focuses on the contending conceptions of time arising in key modernist texts by authors such as Joyce, Woolf and Conrad. I argue that the uniformity and regularity of time necessary to a rising capitalist society came under attack in similar ways from both modernist literary aesthetics and new scientific discoveries. However, while Einstein’s theory of relativity may have led to a change of paradigm in scientific thought, it failed to significantly alter social and popular conceptions of time. Although alternative ways of thinking about and living with time are proposed by modernist authors, they remain isolated aesthetic experiments, ineffectual against the regulatory pressure of economic and social structures. In this struggle over the nature of time, I suggest, science and literature joined forces against a society increasingly governed by economic reason. The fact that they lost this struggle can serve as a striking illustration of a growing shift of social influence from science and art towards the economy.
Abstract:
We report the first microbiological characterization of a terrestrial methane seep in a cryo-environment, in the form of an Arctic hypersaline (~24% salinity), subzero (−5°C), perennial spring arising through thick permafrost in an area with an average annual air temperature of −15°C. Bacterial and archaeal 16S rRNA gene clone libraries indicated a relatively low diversity of phylotypes within the spring sediment (Shannon index values of 1.65 and 1.39, respectively). Bacterial phylotypes were related to microorganisms such as Loktanella, Gillisia, Halomonas and Marinobacter spp. previously recovered from cold, saline habitats. A proportion of the bacterial phylotypes were cultured, including Marinobacter and Halomonas, with all isolates capable of growth at the in situ temperature (−5°C). Archaeal phylotypes were related to signatures from hypersaline deep-sea methane-seep sediments and were dominated by the ANME-1a clade of anaerobic methane-oxidizing archaea. CARD-FISH analyses indicated that cells within the spring sediment consisted of ~84.0% bacterial and 3.8% archaeal cells, with ANME-1 cells accounting for most of the archaeal cells. The major gas discharging from the spring was methane (~50%), with a low CH4/C2+ ratio and hydrogen and carbon isotope signatures consistent with a thermogenic origin of the methane. Overall, this hypersaline, subzero environment supports a viable microbial community capable of activity at the in situ temperature, in which methane may serve as an energy and carbon source sustaining metabolism based on the anaerobic oxidation of methane. The site also provides a model of how a methane seep can form in a cryo-environment, as well as a possible mechanism for the hypothesized Martian methane plumes.
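For reference, the Shannon index quoted above is H' = −Σ pᵢ ln pᵢ over phylotype relative abundances; a minimal sketch with hypothetical clone-library counts (not the study's data):

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over phylotype counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

# Hypothetical counts chosen only for illustration.
print(shannon_index([40, 25, 15, 10, 5, 3, 2]))  # ~1.56, the same ballpark as the text
```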
Abstract:
Traditional schemes for abstract interpretation-based global analysis of logic programs generally focus on obtaining procedure argument mode and type information. Variable sharing information is often given only the attention needed to preserve the correctness of the analysis. However, such sharing information can be very useful. In particular, it can be used to predict run-time goal independence, which can eliminate costly run-time checks in and-parallel execution. In this paper, a new algorithm for abstract interpretation of logic programs is described which concentrates on inferring the dependencies of the terms bound to program variables with increased precision and at all points in the execution of the program, rather than just at the procedure level. Algorithms are presented for computing abstract entry and success substitutions which extensively keep track of variable aliasing and term dependence information. In addition, a new, abstract-domain-independent fixpoint algorithm is presented and described in detail. The algorithms are illustrated with examples. Finally, results from an implementation of the abstract interpreter are presented.
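As a toy illustration of the kind of information such analyses propagate (a simplified version of the classical set-sharing domain, not this paper's algorithms), here is a sketch of abstract unification over sets of possibly sharing variables:

```python
from itertools import combinations

def star(sets):
    """Closure of a collection of frozensets under pairwise union."""
    out = set(sets)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(out), 2):
            if (u := a | b) not in out:
                out.add(u)
                changed = True
    return out

def amgu(sharing, x, term_vars):
    """Abstract unification of variable x with a term over term_vars:
    sharing groups unrelated to the binding survive; related ones merge."""
    rel_x = {s for s in sharing if x in s}
    rel_t = {s for s in sharing if s & term_vars}
    irrel = sharing - rel_x - rel_t
    return irrel | {a | b for a in star(rel_x) for b in star(rel_t)}

sharing = {frozenset('X'), frozenset('Y'), frozenset('Z')}
# After X = f(Y), X and Y may share; Z stays independent, so a goal
# touching only Z could still run in parallel without run-time checks.
print(amgu(sharing, 'X', {'Y'}))  # {frozenset({'Z'}), frozenset({'X', 'Y'})}
```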
Abstract:
Modern power converters must fulfill many requirements. Most applications that use these converters demand smaller converters with high efficiency, improved power density and a fast dynamic response. For instance, loads like microprocessors demand aggressive current steps with very high slew rates (100 A/µs and higher); moreover, during these load steps, the supply voltage of the microprocessor must be kept within tight limits to ensure correct operation. Meeting these requirements is not an easy task; complex solutions, such as advanced topologies (multiphase converters, for example) as well as advanced control strategies, are often needed. It is also necessary to operate the converter at high switching frequencies and to use capacitors with high capacitance and low ESR. Improving the dynamic response of power converters does not rely only on the control strategy; the power topology must also be suited to enable a fast dynamic response. Moreover, in recent years, a fast dynamic response has come to mean not only handling fast load steps but also fast output voltage steps. At least two applications that require fast voltage changes can be named. The first is low-power microprocessors, in which the supply voltage is changed according to the workload while the operating frequency of the microprocessor is changed at the same time; an important reduction in voltage-dependent losses can be achieved with such changes. This technique is known as Dynamic Voltage Scaling (DVS). Another application where important energy savings can be achieved by changing the supply voltage is radio-frequency power amplifiers. For example, RF architectures based on ‘Envelope Tracking’ and ‘Envelope Elimination and Restoration’ techniques can take advantage of supply-voltage modulation to achieve important energy savings in the power amplifier. However, to obtain these efficiency improvements, a power converter with high efficiency and sufficient bandwidth (hundreds of kHz or even tens of MHz) is needed to provide an adequate supply voltage. The main objective of this Thesis is to improve the dynamic response of DC-DC converters from the point of view of the power topology, where dynamic response refers both to load steps and to voltage steps; it is also of interest to modulate the output voltage of the converter with a specific bandwidth. To accomplish this, the question of what limits the dynamic response of power converters must be answered. Analyzing this question leads to the conclusion that the dynamic response is limited by the power topology and, specifically, by the filter inductance of the converter, which sits in series between the input and the output. This series inductance determines the gain of the converter and provides its regulation capability. Although the energy stored in the filter inductance enables regulation and the filtering of the output voltage, it imposes the limitation that concerns this Thesis: the series inductance stores energy and prevents the current from changing quickly, limiting the slew rate of the current through the inductor. Different solutions have been proposed in the literature to reduce the limit imposed by the filter inductor, including many new topologies and improvements to known topologies.
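To see the scale of the limitation, the inductor current cannot change faster than di/dt = V_L/L; a back-of-the-envelope sketch with illustrative buck-converter values (not taken from the Thesis):

```python
# Slew-rate limit of a buck converter's filter inductor: di/dt = V_L / L.
V_in, V_out, L = 12.0, 1.0, 150e-9     # volts, volts, henries (150 nH)
slew_up = (V_in - V_out) / L           # A/s while the high-side switch conducts
slew_down = V_out / L                  # A/s during the off phase
print(slew_up / 1e6, slew_down / 1e6)  # ~73 A/us up, ~6.7 A/us down: below a 100 A/us load step
```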
Complex control strategies have also been proposed with the objective of improving the dynamic response of power converters. In the proposed topologies, the energy stored in the series inductor is reduced; examples are multiphase converters, Buck converters operating at very high frequency, or the addition of a low-impedance path in parallel with the series inductance. Control techniques proposed in the literature focus on adjusting the output voltage as fast as the power stage allows; examples are hysteresis control, V² control and minimum-time control. In some of the proposed topologies, a reduction in the value of the series inductance is achieved, and with it the energy stored in this magnetic element; less stored energy means a faster dynamic response. However, in some cases (as in the high-frequency Buck converter), the dynamic response is improved at the cost of worse efficiency. In this Thesis, a drastic solution is proposed: to completely eliminate the series inductance of the converter. This is a more radical solution than those proposed in the literature. If the series inductance is eliminated, the regulation capability of the converter is limited, which can make it difficult to use the topology in single-converter solutions; however, the topology is suitable for power architectures where the energy conversion is performed by more than one converter. When the series inductor is eliminated, the current slew rate is no longer limited, and the dynamic response of the converter becomes independent of the switching frequency. This is the main advantage of eliminating the series inductor. The main objective is to propose an energy conversion strategy that works without a series inductance. Without a series inductance, no energy is stored between the input and the output of the converter, and the dynamic response would be instantaneous if all devices were ideal. If the energy transfer from input to output occurred instantaneously when a load step appears, conceptually it would not be necessary to store energy at the output of the converter (no output capacitor COUT would be needed), and if the input source were ideal, the input capacitor CIN would not be necessary either. This last feature (no CIN with an ideal VIN) is common to all power converters. In a real implementation, however, parasitic inductances such as the leakage inductance of the transformer and the parasitic inductance of the PCB cannot be avoided, because they are inherent to the construction of the converter. These parasitic elements do not significantly affect the proposed concept. This Thesis therefore proposes operating the converter without a series inductance in order to improve its dynamic response; in exchange, the continuous regulation capability of the converter is lost. It is called continuous because, as explained throughout the Thesis, discrete regulation is indeed possible: a converter without a filter inductance, and without energy stored in the magnetic element, can reach a limited number of output voltages, and the changes between these output voltage levels are achieved quickly.
The proposed energy conversion strategy is implemented by means of a multiphase converter in which the phases are coupled by discrete two-winding transformers instead of coupled inductors, since transformers are, ideally, non-energy-storing elements. This idea is the main contribution of this Thesis. The feasibility of the strategy is first analyzed and then verified by simulation and by the implementation of experimental prototypes. Once the strategy is proved valid, different options for implementing the magnetic structure are analyzed; three different discrete transformer arrangements are studied and implemented. A converter based on this energy conversion strategy is designed with a different approach from that used for classic converters, since an additional design degree of freedom is available: the switching frequency can be chosen according to the design specifications without penalizing the dynamic response or the efficiency. Low operating frequencies can be chosen to favor efficiency; high operating frequencies (MHz) can be chosen to favor the size of the converter. For this reason, a dedicated design procedure is proposed for the ‘inductorless’ conversion strategy. Finally, applications where the features of the proposed conversion strategy (high efficiency with fast dynamic response) are advantageous are identified. One example is two-stage power architectures, where a high-efficiency converter is needed as the first stage and a second stage provides the fine regulation. Another example is RF power amplifiers, where the voltage is modulated following an envelope reference in order to save power; this application requires a high-efficiency converter capable of fast voltage steps. The main contributions of this Thesis are: the proposal of a conversion strategy that ideally stores no energy in the magnetic element; the validation and implementation of the proposed energy conversion strategy; the study of different magnetic structures based on discrete transformers for its implementation; the elaboration and validation of a design procedure; and the identification and validation of applications for the strategy. It is important to remark that this work was done in collaboration with Intel. The particular features of the proposed conversion strategy open the possibility of solving the problems related to microprocessor powering in a different way. For example, the high efficiency achieved makes the strategy a good candidate for power conditioning, as the first stage in a two-stage power architecture for powering microprocessors.
Abstract:
Chaos analysis tools have been conditioned by the type of signals obtained, which in almost every case have analogue characteristics. In certain cases, however, a chaotic digital signal is obtained, and these signals need a different approach from conventional analogue ones. The main objective of this paper is to present some possible approaches to the study of such signals and how information about their characteristics may be obtained in the most straightforward way possible. We have obtained digital chaotic signals from an Optical Logic Cell with feedback between the output and one of the possible control gates. This chaos has been reported in several papers, and its characteristics have been employed as a possible method for securing communications and as a way to perform encryption. In both cases, perturbations in the transmission medium caused problems both for the synchronization of the chaotic generators at emitter and receiver and for the recovery of the information data. A proposed way to analyze the presence of a perturbation is to study the noise content of the transmitted signal and to implement a way to eliminate it. In the present case, the digital signal is converted to a multilevel one by grouping bits in packets of 8 bits and applying conventional methods of time-frequency analysis to them. The results give information about the change in the signal's characteristics and hence about the noise or perturbations present. Representations equivalent to the phase and Feigenbaum diagrams for digital signals are employed in this case.
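A minimal sketch of that conversion, with a logistic map standing in for the optical cell's chaotic bit stream (all names and parameters are illustrative):

```python
import numpy as np
from scipy.signal import stft

# Chaotic bit source: threshold a logistic-map orbit.
x, bits = 0.4, []
for _ in range(8 * 4096):
    x = 3.99 * x * (1 - x)
    bits.append(1 if x > 0.5 else 0)

# Group bits in packets of 8 to obtain a multilevel (0..255) signal,
# then apply a conventional time-frequency analysis to it.
words = np.packbits(np.array(bits, dtype=np.uint8)).astype(float)
f, t, Z = stft(words - words.mean(), nperseg=128)
print(np.abs(Z).shape)  # (frequency bins, time frames) of the multilevel signal
```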
Abstract:
The objective of this thesis is to study the dynamics of the logarithmic layer of wall-bounded turbulent flows. Specifically, we propose a new structural model built on different types of coherent structures: sweeps, ejections, vortex clusters and streaks. The tool used is the direct numerical simulation of turbulent channels. Since the early work of Theodorsen (1952), coherent structures have played a fundamental role in understanding the organization and dynamics of turbulent flows. Today, data from direct numerical simulations taken at non-contiguous instants allow the fundamental properties of three-dimensional coherent structures to be studied from a statistical point of view. However, the dynamics cannot be understood in detail from isolated instants in time alone; the structures must be tracked continuously. Although there are some studies of the temporal evolution of the smallest structures at moderate Reynolds numbers, e.g. Robinson (1991), a complete study at high Reynolds numbers and for all the scales present in the logarithmic layer has not yet been carried out, and is the objective of this thesis. The most interesting problems are found in the logarithmic region, which is the seat of the cascades of vorticity, energy and momentum. Several models attempt to explain the organization of turbulent flows in that region. One of the most widespread was proposed by Adrian et al. (2000) on the basis of experimental observations, taking as its fundamental element packets of hairpin vortices that act cooperatively to generate low-momentum ramps. An alternative model was conceived by del Álamo & Jiménez (2006) using numerical data. Also based on vortex clusters, it posited a much more disorganized scenario, with structures that are not hairpin-shaped. Although the two models are kinematically similar, they are not dynamically so, particularly regarding the importance of the wall in the creation and life of the structures. Another important unresolved point concerns the turbulent-cascade model proposed by Kolmogorov (1941b) and its relation to coherent structures measurable in the flow. To answer these questions, we have developed a new method to track coherent structures in time and have applied it to numerical simulations of turbulent channels with Reynolds numbers high enough to contain a non-trivial range of scales and computational domains large enough to correctly represent the dynamics of the logarithmic layer. Our efforts have proceeded in four steps. First, we carried out a campaign of direct numerical simulations at different Reynolds numbers and box sizes to evaluate the effect of the computational domain on the one-point statistics and the spectra. From the results we concluded that simulations in boxes of streamwise length 2π and spanwise width π times the channel half-height are large enough to correctly reproduce the interactions between the coherent structures of the logarithmic layer and the rest of the scales. These simulations are used as the starting point for the subsequent analyses.
Second, the coherent structures corresponding to regions of intense tangential Reynolds stress (Qs) in a turbulent channel have been studied by extending quadrant analysis to three dimensions, with special emphasis on the logarithmic layer and the outer region. The coherent structures are identified as contiguous regions of space where the tangential Reynolds stress is more intense than a given threshold. The results show that Qs detached from the wall are isotropically oriented, and their net contribution to the mean Reynolds stress is zero. The largest contribution comes from a family of larger, self-similar structures whose lower parts lie very close to the wall (wall-attached), with complex geometries and fractal dimensions ≈ 2. These structures are shaped like ‘sponges of flakes’, whereas vortex clusters are shaped like ‘sponges of strings’. Although the number of objects decays away from the wall, the fraction of the Reynolds stress they contain is independent of their height, and a large part resides in a few structures that extend beyond the channel centerline, as in the large-scale structures proposed by other authors. The dominant structures in the logarithmic layer are side-by-side pairs of sweeps and ejections, with associated vortex clusters, that share the dimensions and stresses of the wall-attached eddies proposed by Townsend. Third, we studied the temporal evolution of the Qs and vortex clusters using the direct numerical simulations presented above, up to friction Reynolds numbers Reτ = 4200. The structures were identified following the procedure described in the previous paragraph and then tracked in time. From the geometric intersection of structures belonging to contiguous instants, we built graphs of temporal connections among all the objects and, from them, defined primary and secondary branches, such that each branch represents the temporal evolution of one coherent structure. Once the evolutions are properly organized, they provide all the information needed to characterize the history of the structures from birth to death. The results show that structures are born at all distances from the wall, but with higher probability near it, where the shear is most intense. Most remain small and do not live long; however, there is a family of structures that grow enough to attach to the wall and extend across the logarithmic layer, becoming the structures observed earlier and described by Townsend. These structures are geometrically self-similar, with lifetimes proportional to their size. Most reach sizes above the Corrsin scale, so their dynamics are controlled by the mean shear. The results also show that ejections move away from the wall with mean velocity uτ (the friction velocity), and their bases attach to the wall very quickly, early in their lives. Sweeps, on the contrary, move towards the wall with velocity −uτ and attach later. In both cases, the objects remain attached to the wall for two thirds of their lives.
In the streamwise direction, the structures travel at velocities close to the mean convection velocity of the flow and are deformed by the shear. Finally, we interpret the turbulent cascade not only as a conceptual way of organizing the flow, but as a physical process in which coherent structures merge and split. The volume of a structure changes smoothly when it neither merges nor splits, and abruptly otherwise. The merging and splitting processes can be understood as a direct cascade (splits) or an inverse cascade (mergers), following the eddy-cascade concept conceived by Richardson (1920) and Obukhov (1941). The analysis of the data shows that structures smaller than 30η (Kolmogorov units) never merge or split, i.e., they do not take part in the cascade process. On the contrary, those larger than 100η always split or merge at least once in their lives. In those cases, the total volume gained and lost is an important fraction of the mean volume of the structure involved, with a slightly greater tendency to split (direct cascade) than to merge (inverse cascade). Most interactions between branches are due to splits or mergers of very small, Kolmogorov-scale fragments with larger structures, although the effect of larger fragments is not negligible. We have also found that splits tend to occur towards the end of a structure's life, and mergers towards the beginning. Although the results for the direct and inverse cascades are not identical, they are very symmetric, which suggests a high degree of reversibility in the cascade process.
ABSTRACT The purpose of the present thesis is to study the dynamics of the logarithmic layer of wall-bounded turbulent flows. Specifically, to propose a new structural model based on four different coherent structures: sweeps, ejections, clusters of vortices and velocity streaks. The tool used is the direct numerical simulation of time-resolved turbulent channels. Since the first work by Theodorsen (1952), coherent structures have played an important role in the understanding of turbulence organization and its dynamics. Nowadays, data from individual snapshots of direct numerical simulations allow the study of the three-dimensional statistical properties of those objects, but their dynamics can only be fully understood by tracking them in time. Although the temporal evolution has already been studied for small structures at moderate Reynolds numbers, e.g., Robinson (1991), a temporal analysis of three-dimensional structures spanning from the smallest to the largest scales across the logarithmic layer has yet to be performed and is the goal of the present thesis. The most interesting problems lie in the logarithmic region, which is the seat of cascades of vorticity, energy, and momentum. Different models involving coherent structures have been proposed to represent the organization of wall-bounded turbulent flows in the logarithmic layer. One of the most extended ones was conceived by Adrian et al. (2000) and built on packets of hairpins that grow from the wall and work cooperatively to generate low-momentum ramps. A different view was presented by del Álamo & Jiménez (2006), who extracted coherent vortical structures from DNSs and proposed a less organized scenario. Although the two models are kinematically fairly similar, they have important dynamical differences, mostly regarding the relevance of the wall.
Another open question is whether such a model can be used to explain the cascade process proposed by Kolmogorov (1941b) in terms of coherent structures. The challenge would be to identify coherent structures undergoing a turbulent cascade that can be quantified. To gain a better insight into the previous questions, we have developed a novel method to track coherent structures in time, and used it to characterize the temporal evolutions of eddies in turbulent channels with Reynolds numbers high enough to include a non-trivial range of length scales, and computational domains sufficiently long and wide to reproduce correctly the dynamics of the logarithmic layer. Our efforts have followed four steps. First, we have conducted a campaign of direct numerical simulations of turbulent channels at different Reynolds numbers and box sizes, and assessed the effect of the computational domain on the one-point statistics and spectra. From the results, we have concluded that computational domains with streamwise and spanwise sizes 2π and π times the half-height of the channel, respectively, are large enough to accurately capture the dynamical interactions between structures in the logarithmic layer and the rest of the scales. These simulations are used in the subsequent chapters. Second, the three-dimensional structures of intense tangential Reynolds stress in plane turbulent channels (Qs) have been studied by extending the classical quadrant analysis to three dimensions, with emphasis on the logarithmic and outer layers. The eddies are identified as connected regions of intense tangential Reynolds stress. Qs are then classified according to their streamwise and wall-normal fluctuating velocities as inward interactions, outward interactions, sweeps and ejections. It is found that wall-detached Qs are isotropically oriented background stress fluctuations, common to most turbulent flows, and do not contribute to the mean stress. Most of the stress is carried by a self-similar family of larger wall-attached Qs, increasingly complex away from the wall, with fractal dimensions ≈ 2. They have shapes similar to ‘sponges of flakes’, while vortex clusters resemble ‘sponges of strings’. Although their number decays away from the wall, the fraction of the stress that they carry is independent of their heights, and a substantial part resides in a few objects extending beyond the centerline, reminiscent of the very large-scale motions reported by several authors. The predominant logarithmic-layer structures are side-by-side pairs of sweeps and ejections, with an associated vortex cluster, and dimensions and stresses similar to Townsend’s conjectured wall-attached eddies. Third, the temporal evolution of Qs and vortex clusters is studied using time-resolved DNS data up to Reτ = 4200 (friction Reynolds number). The eddies are identified following the procedure presented above, and then tracked in time. From the geometric intersection of structures in consecutive fields, we have built temporal connection graphs of all the objects, and defined main and secondary branches in such a way that each branch represents the temporal evolution of one coherent structure. Once these evolutions are properly organized, they provide the necessary information to characterize eddies from birth to death. The results show that the eddies are born at all distances from the wall, although with higher probability near it, where the shear is strongest. Most of them stay small and do not last for long times.
However, there is a family of eddies that become large enough to attach to the wall while they reach into the logarithmic layer, and become the wall-attached structures previously observed in instantaneous flow fields. They are geometrically self-similar, with sizes and lifetimes proportional to their distance from the wall. Most of them achieve lengths well above the Corrsin scale, and hence their dynamics are controlled by the mean shear. Eddies associated with ejections move away from the wall with an average velocity uτ (the friction velocity), and their bases attach very fast at the beginning of their lives. Conversely, sweeps move towards the wall at −uτ, and attach later. In both cases, they remain attached for two thirds of their lives. In the streamwise direction, eddies are advected and deformed by the local mean velocity. Finally, we interpret the turbulent cascade not only as a way to conceptualize the flow, but as an actual physical process in which coherent structures merge and split. The volume of an eddy can change either smoothly, when it is not merging or splitting, or through sudden changes. The processes of merging and splitting can be thought of as a direct (when splitting) or an inverse (when merging) cascade, following the ideas envisioned by Richardson (1920) and Obukhov (1941). It is observed that there is a minimum length of 30η (Kolmogorov units) above which mergers and splits begin to be important. Moreover, all eddies above 100η split and merge at least once in their lives. In those cases, the total volume gained and lost is a substantial fraction of the average volume of the structure involved, with slightly more splits (direct cascade) than mergers. Most branch interactions are found to be the shedding or absorption of Kolmogorov-scale fragments by larger structures, but more balanced splits and mergers spanning a wide range of scales are also found to be important. The results show that splits are more probable at the end of the life of an eddy, while mergers take place at the beginning. Although the results for the direct and the inverse cascades are not identical, they are found to be very symmetric, which suggests a high degree of reversibility of the cascade process.
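A minimal sketch of the overlap-based tracking described above (illustrative names, data and thresholds, not the thesis code):

```python
import numpy as np
from scipy import ndimage

def track(snapshots, threshold):
    """Link thresholded 3-D structures across consecutive fields by
    geometric overlap; returns edges (frame, label, next_label)."""
    prev, edges = None, []
    for i, field in enumerate(snapshots):
        labels, _ = ndimage.label(field > threshold)
        if prev is not None:
            # objects sharing at least one grid point are connected in time
            pairs = set(zip(prev.ravel(), labels.ravel()))
            edges += [(i - 1, a, b) for a, b in pairs if a and b]
        prev = labels
    return edges

# Toy data: a coarse threshold so the random blobs overlap between frames.
rng = np.random.default_rng(1)
frames = rng.random((4, 32, 32, 32))  # stand-in for intense-|uv| fields
print(len(track(frames, 0.6)))        # number of temporal links found
```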
Abstract:
Traffic flow time series data are usually high-dimensional and very complex. They are also sometimes imprecise and distorted due to data-collection sensor malfunction. Additionally, events like congestion caused by traffic accidents add more uncertainty to real-time traffic conditions, making traffic flow forecasting a complicated task. This article presents a new data preprocessing method targeting multidimensional time series with a very high number of dimensions, and shows its application to real traffic flow time series from the California Department of Transportation (PEMS web site). The proposed method consists of three main steps. First, based on mTESL, a language for defining events in multidimensional time series, we identify a number of types of events that correspond to either incorrect data or data with interference. Second, each event type is restored using an original method that combines real observations, locally forecasted values and historical data. Third, an exponential smoothing procedure is applied globally to eliminate noise interference and other random errors, so as to provide good-quality source data for future work.
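As a sketch of the third step, simple exponential smoothing with an illustrative smoothing factor (the article's actual procedure and parameters may differ):

```python
def exp_smooth(series, alpha=0.3):
    """Exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

flow = [310, 305, 990, 300, 295, 290]        # 990: the kind of spike a faulty sensor leaves
print([round(v) for v in exp_smooth(flow)])  # [310, 308, 513, 449, 403, 369]
```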
Abstract:
The success of highly active anti-retroviral therapy (HAART) has inspired new concepts for eliminating HIV from infected individuals. A major obstacle is the persistence of long-lived reservoirs of latently infected cells that might become activated at some time after cessation of therapy. We propose that, in the context of treatment strategies to deliberately activate and eliminate these reservoirs, hybrid toxins targeted to kill HIV-infected cells be reconsidered in combination with HAART. Such combinations might also prove valuable in protocols aimed at preventing mother-to-child transmission and establishment of infection immediately after exposure to HIV. We suggest experimental approaches in vitro and in animal models to test various issues related to safety and efficacy of this concept.
Abstract:
Escherichia coli dihydrofolate reductase (DHFR; EC 1.5.1.3) contains five tryptophan residues that have been replaced with 6-19F-tryptophan. The 19F NMR assignments are known in the native, unliganded form and the unfolded form. We have used these assignments with stopped-flow 19F NMR spectroscopy to investigate the behavior of specific regions of the protein in real time during urea-induced unfolding. The NMR data show that within 1.5 sec most of the intensities of the native 19F resonances of the protein are lost but only a fraction (approximately 20%) of the intensities of the unfolded resonances appears. We postulate that the early disappearance of the native resonances indicates that most of the protein rapidly forms an intermediate in which the side chains have considerable mobility. Stopped-flow far-UV circular dichroism measurements indicate that this intermediate retains native-like secondary structure. Eighty percent of the intensities of the NMR resonances assigned to the individual tryptophans in the unfolded state appear with similar rate constants (k approximately 0.14 sec-1), consistent with the major phase of unfolding observed by stopped-flow circular dichroism (representing 80% of total amplitude). These data imply that after formation of the intermediate, which appears to represent an expanded structural form, all regions of the protein unfold at the same rate. Stopped-flow measurements of the fluorescence and circular dichroism changes associated with the urea-induced unfolding show a fast phase (half-time of about 1 sec) representing 20% of the total amplitude in addition to the slow phase mentioned above. The NMR data show that approximately 20% of the total intensity for each of the unfolded tryptophan resonances is present at 1.5 sec, indicating that these two phases may represent the complete unfolding of the two different populations of the native protein.
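As a quick consistency check (not part of the paper's analysis), a single-exponential phase's rate constant and half-time are related by t1/2 = ln 2 / k:

```python
import math

k_slow = 0.14                # s^-1, the major unfolding phase quoted above
print(math.log(2) / k_slow)  # ~5 s half-time for the slow phase
print(math.log(2) / 1.0)     # a ~1 s half-time implies k ~ 0.7 s^-1 for the fast phase
```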
Abstract:
Scholars understandably devote a great deal of effort to studying how well patent law works to incentivize the most important inventions. After all, these inventions form the foundation of our new technological age. But very little time is spent focusing on the other end of the spectrum: inventions that are no better than what the public already has. At first blush, studying such “horizontal” innovation seems pointless. But this inquiry actually reveals much about how patents can be used in unintended and, arguably, anticompetitive ways. The issue has roots in one unintuitive aspect of patent law. Despite the law’s goal of promoting innovation, patents can be obtained on inventions that are no better than existing technology. Such patents might appear worthless, but companies regularly obtain them to cover interfaces. That is because interface patents derive value from two distinct characteristics. First, they can have “innovation value” based on how much better the patented interface is than existing technology. Second, interface patents can also have “compatibility value”: the patented technology is often essential to make products operate (i.e., be compatible) with a particular interface. In practical terms, this means that an interface patent covering little or no meaningful advance can give a company the ability to extract rents and foreclose competition. This undesirable result is a consequence of how patent law has structured its remedies. For years, patent law has implicitly awarded both innovation and compatibility value. Recently, the courts have taken a sensible first step and excluded compatibility value from reasonable-royalty recoveries for standard-essential patents. This Article argues that the law needs to go further and do the same for all essential interface patents. Additionally, patent law should reform the way it awards injunctions and lost profits to likewise exclude compatibility value. This proposal has two benefits. First, it would eliminate the incentive for wasteful patents on horizontal technology. Second, and more importantly, the value of all interface patents would be better aligned with the goals of the patent system.
Abstract:
These days, as we face extremely powerful attacks on servers over the Internet (say, by Advanced Persistent Threat attackers or by surveillance by a powerful adversary), Shamir has claimed that “Cryptography is Ineffective,” and some understood it as “Cryptography is Dead!” In this talk I will discuss the implications for cryptographic systems design when facing such strong adversaries. Is crypto dead, or do we need to design it better, taking into account not only mathematical constraints but also systems-vulnerability constraints? Can crypto be effective at all when your computer or your cloud is penetrated? What is lost and what can be saved? These are very basic issues at this point in time, when we are facing potential loss of privacy and security.