823 results for Boolean Computations
Abstract:
A discussion of nonlinear dynamics, demonstrated by the familiar automobile, is followed by the development of a systematic method for analyzing a possibly nonlinear time series using difference equations in the general state-space format. This format allows recursive state-dependent parameter estimation after each observation, thereby revealing the dynamics inherent in the system in combination with random external perturbations. The one-step-ahead prediction errors at each time period, transformed to have constant variance, and the estimated parametric sequences provide the information to (1) formally test whether time series observations $y_t$ are some linear function of random errors $\epsilon_s$, for some t and s, or whether the series would more appropriately be described by a nonlinear model such as bilinear, exponential, threshold, etc.; (2) formally test whether a statistically significant change has occurred in structure/level, either historically or as it occurs; (3) forecast nonlinear systems with a new and innovative (but very old numerical) technique utilizing rational functions to extrapolate individual parameters as smooth functions of time, which are then combined to obtain the forecast of y; and (4) suggest a measure of resilience, i.e., how much perturbation, whether internal or external to the system, a structure/level can tolerate and remain statistically unchanged. Although similar to one-step control, this provides a less rigid way to think about changes affecting social systems. Applications consisting of the analysis of some familiar and some simulated series demonstrate the procedure. Empirical results suggest that this state-space or modified augmented Kalman filter may provide interesting ways to identify particular kinds of nonlinearities as they occur in structural change via the state trajectory. A computational flow chart detailing the computations and the software input and output is provided in the body of the text. IBM Advanced BASIC program listings to accomplish most of the analysis are provided in the appendix.
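A minimal Python sketch (illustrative only; it does not reproduce the BASIC listings referenced above, and all matrices and noise levels are assumed for the example) of the kind of recursive state-space update the abstract describes: after each observation the parameter estimate is updated, and the one-step-ahead prediction error is standardized to constant variance.

    import numpy as np

    def kalman_step(x, P, y, F, H, Q, R):
        """One predict/update cycle of a linear Kalman filter.
        x: state (n,1); P: covariance (n,n); y: scalar observation;
        F: transition (n,n); H: observation row (1,n); Q, R: noise covariances."""
        x_pred = F @ x                          # predicted state (parameters)
        P_pred = F @ P @ F.T + Q                # predicted covariance
        e = (y - H @ x_pred).item()             # one-step-ahead prediction error
        S = (H @ P_pred @ H.T + R).item()       # innovation variance
        e_std = e / np.sqrt(S)                  # error transformed to constant variance
        K = P_pred @ H.T / S                    # Kalman gain
        x_new = x_pred + K * e                  # updated parameter estimate
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new, e_std

    # Toy usage: track a slowly drifting level in a noisy simulated series.
    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(0, 0.1, 200)) + rng.normal(0, 0.3, 200)
    x, P = np.zeros((1, 1)), np.eye(1)
    F, H, Q, R = np.eye(1), np.ones((1, 1)), 1e-3 * np.eye(1), 0.1
    std_errors = []
    for y in series:
        x, P, e_std = kalman_step(x, P, y, F, H, Q, R)
        std_errors.append(e_std)                # sequence used for the formal tests

The standardized innovations and the recursively estimated state trajectory are the two sequences the abstract builds its tests, forecasts, and resilience measure on.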
Abstract:
One of the fundamental questions in neuroscience is to understand how encoding of sensory inputs is distributed across neuronal networks in cerebral cortex to influence sensory processing and behavioral performance. The fact that the structure of neuronal networks is organized according to cortical layers raises the possibility that sensory information could be processed differently in distinct layers. The goal of my thesis research is to understand how laminar circuits encode information in their population activity, how the properties of the population code adapt to changes in visual input, and how population coding influences behavioral performance. To this end, we performed a series of novel experiments to investigate how sensory information in the primary visual cortex (V1) emerges across laminar cortical circuits. First, it is commonly known that the amount of information encoded by cortical circuits depends critically on whether or not nearby neurons exhibit correlations. We examined correlated variability in V1 circuits from a laminar-specific perspective and observed that cells in the input layer, which have only local projections, encode incoming stimuli optimally by exhibiting low correlated variability. In contrast, output layers, which send projections to other cortical and subcortical areas, encode information suboptimally by exhibiting large correlations. These results argue that neuronal populations in different cortical layers play different roles in network computations. Secondly, a fundamental feature of cortical neurons is their ability to adapt to changes in incoming stimuli. Understanding how adaptation emerges across cortical layers to influence information processing is vital for understanding efficient sensory coding. We examined the effects of adaptation, on the time-scale of a visual fixation, on network synchronization across laminar circuits. Specific to the superficial layers, we observed an increase in gamma-band (30-80 Hz) synchronization after adaptation that was correlated with an improvement in neuronal orientation discrimination performance. Thus, synchronization enhances sensory coding to optimize network processing across laminar circuits. Finally, we tested the hypothesis that individual neurons and local populations synchronize their activity in real-time to communicate information about incoming stimuli, and that the degree of synchronization influences behavioral performance. These analyses assessed for the first time the relationship between changes in laminar cortical networks involved in stimulus processing and behavioral performance.
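The "correlated variability" referred to above is typically quantified as the spike-count (noise) correlation between pairs of simultaneously recorded neurons; the following Python sketch shows that standard computation on simulated data (an illustration of the general measure, not the thesis's own analysis code).

    import numpy as np

    def noise_correlation(counts_a, counts_b):
        """Pearson correlation of trial-by-trial spike counts for one stimulus."""
        return np.corrcoef(counts_a, counts_b)[0, 1]

    # Toy usage: two neurons sharing a small amount of trial-to-trial variability.
    rng = np.random.default_rng(1)
    shared = rng.normal(size=500)                          # common fluctuation across trials
    counts_a = rng.poisson(np.maximum(10 + shared, 0.1))   # neuron A spike counts
    counts_b = rng.poisson(np.maximum(12 + shared, 0.1))   # neuron B spike counts
    print(noise_correlation(counts_a, counts_b))           # small positive correlation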
Abstract:
Clinical text understanding (CTU) is of interest to health informatics because critical clinical information, frequently represented as unconstrained text in electronic health records, is extensively used by human experts to guide clinical practice and decision making and to document the delivery of care, but is largely unusable by information systems for queries and computations. Recent initiatives advocating for translational research call for technologies that can integrate structured clinical data with unstructured data, provide a unified interface to all data, and contextualize clinical information for reuse in the multidisciplinary and collaborative environment envisioned by the CTSA program. This implies that technologies for the processing and interpretation of clinical text should be evaluated not only in terms of their validity and reliability in their intended environment, but also in light of their interoperability and their ability to support information integration and contextualization in a distributed and dynamic environment. This vision adds a new layer of information representation requirements that needs to be accounted for when conceptualizing the implementation or acquisition of clinical text processing tools and technologies for multidisciplinary research. On the other hand, electronic health records frequently contain unconstrained clinical text with high variability in the use of terms and documentation practices, and without commitment to the grammatical or syntactic structure of the language (e.g., triage notes, physician and nurse notes, chief complaints, etc.). This hinders the performance of natural language processing technologies, which typically rely heavily on the syntax and grammatical structure of the text. This document introduces our method to transform the unconstrained clinical text found in electronic health information systems into a formal (computationally understandable) representation that is suitable for querying, integration, contextualization, and reuse, and that is resilient to the grammatical and syntactic irregularities of clinical text. We present our design rationale, our method, and the results of its evaluation in processing chief complaints and triage notes from 8 different emergency departments in Houston, Texas. Finally, we discuss the significance of our contribution in enabling the use of clinical text in a practical bio-surveillance setting.
Abstract:
The present volume gives the observed physical and chemical data obtained by R.V. "Meteor" in the Indian Ocean during cruise 1964/65. The tables are based on the computations made by the National Oceanographic Data Center (NODC) in Washington. In addition to the normally communicated data, the tables contain four chemical parameters: alkalinity, ammonia, fluoride, and calcium.
Abstract:
In February 1983 a new terrestrial photogrammetric survey of Lewis Glacier (0° 9' S) was made, from which the present topographic map has been produced at a scale of 1:5000. At the same time a survey from 1963 was evaluated, providing a basis for computing area and volume changes over the 20-year period: Lewis Glacier has lost 22% of its area and 50% of its volume. Based on maps and field observations of moraines, 10 different stages were identified. Changes of area and volume can be determined for the periods after 1890; two older, undated stages are presumed to be of Little Ice Age origin. Moderate losses from 1890 to 1920 were followed by strong, uninterrupted retreat up to the present. In this respect Lewis Glacier behaves like all other equatorial glaciers that have been examined more closely. Compared to alpine glaciers, the development was similar up to 1950. In the following years, however, the glaciers of the Alps gained mass and advanced, while Lewis Glacier experienced its strongest losses from 1974 to 1983.
Abstract:
Timing is crucial to understanding the causes and consequences of events in Earth history. The calibration of geological time relies heavily on the accuracy of radioisotopic and astronomical dating. Uncertainties in the computations of Earth's orbital parameters and in radioisotopic dating have hampered the construction of a reliable astronomically calibrated time scale beyond 40 Ma. Attempts to construct a robust astronomically tuned time scale for the early Paleogene by integrating radioisotopic and astronomical dating are only partially consistent. Here, using the new La2010 and La2011 orbital solutions, we present the first accurate astronomically calibrated time scale for the early Paleogene (47-65 Ma) uniquely based on astronomical tuning and thus independent of the radioisotopic determination of the Fish Canyon standard. Comparison with geological data confirms the stability of the new La2011 solution back to ~54 Ma. Subsequent anchoring of floating chronologies to the La2011 solution using the very long eccentricity nodes provides an absolute age of 55.530 ± 0.05 Ma for the onset of the Paleocene/Eocene Thermal Maximum (PETM), 54.850 ± 0.05 Ma for the early Eocene ash -17, and 65.250 ± 0.06 Ma for the K/Pg boundary. The new astrochronology presented here indicates that the intercalibration and synchronization of U/Pb and 40Ar/39Ar radiometric geochronology is much more challenging than previously thought.
Abstract:
A main focus of the atmospheric-electric investigations in the Atlantic in 1965 and 1969 was recording the potential gradient in the troposphere with free and captive balloon ascents. The variation of the field with altitude above the sea differs from that over land: a marked increase in field strength occurs at the altitude of the trade-wind (passat) inversion. The electric voltage between the ionosphere and the earth could be obtained by integrating the potential gradient over altitude. Such computations have been made for balloon ascents carried out simultaneously over the ocean and at Weissenau (South Germany). From 15 simultaneous measurements, the average value of the ionosphere potential is 214 kV over the ocean and 216 kV over South Germany, i.e., very nearly the same. Because the individual values also differ only slightly, it can be concluded that, in general, the ionosphere potential has the same value over both places at any given moment. From the ionosphere potential V_I and the field strength E_0 and conductivity λ_0, both measured at the sea surface, the columnar resistance R was derived as 2.4 × 10^17 Ω m^2. Correlating the individual values of the ionosphere potential with the potential gradient measured simultaneously at the sea surface yields a linear proportional relationship; it follows that R is nearly constant. The mean air-earth current density over the ocean could be calculated from the measured small-ion density, taking into account the electrode effect demonstrated at the equator station. The current density was only 0.9 × 10^-12 A/m^2, a value about three and a half times smaller than that estimated by Carnegie and accepted up to now. It therefore seems necessary to correct the earlier calculations of the global current balance.
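For reference, the quantities quoted above are related by the usual columnar Ohm's-law picture of the global circuit (a standard relation, stated here for clarity rather than quoted from the paper): the air-earth current density at the surface is $j = \lambda_0 E_0$, and the columnar resistance follows as $R = V_I / j$; with $V_I \approx 214$ kV and $j \approx 0.9 \times 10^{-12}$ A/m^2, this gives $R \approx 2.4 \times 10^{17}$ Ω m^2, consistent with the value derived above.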
Abstract:
The preference utilization ratio, i.e., the share of imports under preferential tariff schemes out of total imports, has been a popular indicator for measuring the usage of preferential tariffs vis-à-vis tariffs on a most-favored-nation basis. A crucial shortcoming of this measure is its data requirements: import values classified by tariff scheme are not available in most countries. This study proposes an alternative measure of preferential tariff utilization, termed the "tariff exemption ratio." This measure offers the unique advantage of requiring only publicly available data, such as those provided by the World Development Indicators, for its computation. We can thus calculate it for most countries for an international comparison. Our finding is that tariff exemption ratios differ widely across countries, with a global average of approximately 50%.
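The preference utilization ratio defined at the start of this abstract is a simple share; a minimal Python sketch with hypothetical figures (the abstract does not spell out the construction of the proposed tariff exemption ratio, so only the conventional measure is illustrated here):

    def preference_utilization_ratio(preferential_imports: float, total_imports: float) -> float:
        """Share of total imports that enter under preferential tariff schemes."""
        return preferential_imports / total_imports

    # Hypothetical example: 35 of 80 (billion USD) imported under preferences.
    print(preference_utilization_ratio(35.0, 80.0))   # -> 0.4375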
Abstract:
Computational Fluid Dynamics tools have already become a valuable instrument for naval architects during the ship design process, thanks to their accuracy and the available computer power. Unfortunately, the development of RANSE codes, generally used when viscous effects play a major role in the flow, has not reached a mature stage, with the accuracy of the turbulence models and of the free-surface representation being the most important sources of uncertainty. Another level of uncertainty is added when the simulations are carried out for unsteady flows, such as those generally studied in seakeeping and maneuvering analyses, and URANS equation solvers are used. The present work shows the applicability and the benefits of new approaches for turbulence modeling (Detached Eddy Simulation) and free-surface representation (Level Set) in the URANS equation solver CFDSHIP-Iowa. Compared to URANS, DES is expected to predict a much broader frequency content and to behave better in flows where boundary-layer separation plays a major role. Level Set methods are able to capture very complex free-surface geometries, including breaking and overturning waves. The performance of these improvements is tested in a set of fairly complex flows generated by a Wigley hull in pure drift motion, with drift angles ranging from 10 to 60 degrees and at several Froude numbers to study the impact of their variation. Quantitative verification and validation are performed on the obtained results to guarantee their accuracy. The results show the capability of the CFDSHIP-Iowa code to carry out time-accurate simulations of the complex flows of extreme unsteady ship maneuvers. The Level Set method is able to capture very complex free-surface geometries, and the use of DES in unsteady simulations greatly improves the results. Vortical structures and instabilities as a function of drift angle and Fr are qualitatively identified. Overall analysis of the flow pattern shows a strong correlation between the vortical structures and the free-surface wave pattern. Karman-like vortex shedding is identified, and the scaled St agrees well with the universal St value. Tip vortices are identified and the associated helical instabilities analyzed. St based on the hull length decreases with increasing distance along the vortex core (x), which is similar to results from other simulations; however, St scaled with the distance along the vortex core shows strong oscillations, compared with the almost constant values of those previous simulations. The difference may be caused by the effect of the free surface, grid resolution, and the interaction between the tip vortex and other vortical structures, and needs further investigation. This study is exploratory in the sense that finer grids are desirable and experimental data are lacking for large α, especially for the local flow. More recently, the high-performance computing capability of CFDSHIP-Iowa V4 has been improved such that large-scale computations are possible. DES for DTMB 5415 with bilge keels at α = 20º was conducted using three grids with 10M, 48M, and 250M points. DES for the flow around KVLCC2 at α = 30º was analyzed using a 13M grid and compared with earlier DES results on a 1.6M grid. Both studies are consistent with what is concluded on grid resolution herein, since the dominant frequencies of the shear-layer, Karman-like, horse-shoe, and helical instabilities show only marginal variation on grid refinement.
The penalties of using coarse grids are smaller frequency amplitudes and less resolved TKE. Finer grids should therefore be used to improve V&V and to resolve most of the active turbulent scales for all Fr and α, and the results can hopefully be compared with additional EFD data for large α when they become available.
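For reference, the Strouhal numbers discussed above follow the standard nondimensionalization of the shedding frequency (notation assumed here, not quoted from the thesis): $St = f L_{ref} / U$, where $f$ is the dominant frequency, $U$ the free-stream speed, and $L_{ref}$ the reference length (the hull length, or the distance $x$ along the vortex core when St is rescaled).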
Abstract:
The technique of Abstract Interpretation has allowed the development of very sophisticated global program analyses which are at the same time provably correct and practical. We present in a tutorial fashion a novel program development framework which uses abstract interpretation as a fundamental tool. The framework uses modular, incremental abstract interpretation to obtain information about the program. This information is used to validate programs, to detect bugs with respect to partial specifications written using assertions (in the program itself and/or in system libraries), to generate and simplify run-time tests, and to perform high-level program transformations such as multiple abstract specialization, parallelization, and resource usage control, all in a provably correct way. In the case of validation and debugging, the assertions can refer to a variety of program points such as procedure entry, procedure exit, points within procedures, or global computations. The system can reason with much richer information than, for example, traditional types. This includes data structure shape (including pointer sharing), bounds on data structure sizes, and other operational variable instantiation properties, as well as procedure-level properties such as determinacy, termination, nonfailure, and bounds on resource consumption (time or space cost). CiaoPP, the preprocessor of the Ciao multi-paradigm programming system, which implements the described functionality, will be used to illustrate the fundamental ideas.
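As a toy illustration of the underlying idea (this is not CiaoPP, and it uses a deliberately simple sign domain; all names are invented for the example), the following Python sketch abstractly interprets arithmetic expressions, deriving facts such as "this value is provably positive" without running the concrete computation:

    # Abstract values of the sign domain
    NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

    def alpha(n):
        """Abstraction of a concrete integer into the sign domain."""
        return ZERO if n == 0 else (POS if n > 0 else NEG)

    def add(a, b):
        """Abstract addition: sound but approximate (pos + neg is unknown)."""
        if TOP in (a, b):
            return TOP
        if ZERO in (a, b):
            return b if a == ZERO else a
        return a if a == b else TOP

    def mul(a, b):
        """Abstract multiplication: exact on this domain."""
        if ZERO in (a, b):
            return ZERO
        if TOP in (a, b):
            return TOP
        return POS if a == b else NEG

    # Example: for any positive x, x*x + 1 is provably positive.
    x = POS
    print(add(mul(x, x), alpha(1)))   # -> pos

Real systems such as CiaoPP use far richer domains (shapes, sizes, determinacy, cost), but the principle of computing a sound approximation of run-time behavior is the same.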
Abstract:
Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation, makes the techniques used in the corresponding parallelizing compilers and run-time systems potentially interesting even outside the field. The objective of this article is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The article describes the major techniques used for shared memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory management, compile-time analysis, and execution visualization.
Abstract:
We will present calculations of opacities for matter under LTE conditions. Opacities are needed in radiation transport codes to study processes like Inertial Confinement Fusion and plasma amplifiers in X-ray secondary sources. For the calculations we use the code BiGBART, with either a hydrogenic approximation with j-splitting or self-consistent data generated with the atomic physics code FAC. We calculate the atomic structure, oscillator strengths, radiative transition energies, including UTA computations, and photoionization cross-sections. A DCA model determines the configurations considered in the computation of the opacities. The opacities obtained with these two models are compared with experimental measurements.
Abstract:
Upward propagation of a premixed flame in a vertical tube filled with a very lean mixture is simulated numerically using a single irreversible Arrhenius reaction model with infinitely high activation energy. In the absence of heat losses and preferential diffusion effects, a curved flame with stationary shape and velocity close to those of an open bubble ascending in the same tube is found for values of the fuel mass fraction above a certain minimum that increases with the radius of the tube, while the numerical computations cease to converge to a stationary solution below this minimum mass fraction. The vortical flow of the gas behind the flame and in its transport region is described for tubes of different radii. It is argued that this flow may become unstable when the fuel mass fraction is decreased, and that this instability, together with the flame stretch due to the strong curvature of the flame tip in narrow tubes, may be responsible for the minimum fuel mass fraction. Radiation losses and a Lewis number of the fuel slightly above unity decrease the final combustion temperature at the flame tip and increase the minimum fuel mass fraction, while a Lewis number slightly below unity has the opposite effect.
Abstract:
Typical streak computations present in the literature correspond to linear streaks or to small-amplitude nonlinear streaks computed using DNS or nonlinear PSE. We use the Reduced Navier-Stokes (RNS) equations to compute the streamwise evolution of fully nonlinear, high-amplitude streaks in a laminar flat-plate boundary layer. The RNS formulation provides Reynolds-number-independent solutions that are asymptotically exact in the limit $Re \gg 1$; it requires much less computational effort than DNS, and it does not have the consistency and convergence problems of the PSE. We present various streak computations to show that the flow configuration changes substantially when the amplitude of the streaks grows and nonlinear effects come into play. The transversal motion (in the wall-normal/streamwise plane) becomes more important and strongly distorts the streamwise velocity profiles, which end up being quite different from those of the linear case. We analyze in detail the resulting flow patterns for the nonlinearly saturated streaks and compare them with available experimental results.
Abstract:
The stability analysis of open cavity flows is a problem of great interest in the aeronautical industry. This type of flow can appear, for example, in landing gear or auxiliary power unit configurations. Open cavity flow is very sensitive to any change in the configuration, either physical (incoming boundary layer, Reynolds or Mach numbers) or geometrical (length-to-depth and length-to-width ratios). In this work, we have focused on the effect of the geometry and of the Reynolds number on the stability properties of a three-dimensional spanwise-periodic cavity flow in the incompressible limit. To that end, BiGlobal analysis is used to investigate the instabilities in this configuration. The basic flow is obtained by numerical integration of the Navier-Stokes equations with laminar boundary layers imposed upstream. The 3D perturbation, assumed to be periodic in the spanwise direction, is obtained as the solution of the global eigenvalue problem. A parametric study has been performed, analyzing the stability of the flow under variation of the Reynolds number, the L/D ratio of the cavity, and the spanwise wavenumber β. For consistency, multidomain high-order numerical schemes have been used in all the computations, for both the basic flow and the eigenvalue problems. The results allow the neutral curves to be defined in the range L/D = 1 to L/D = 3. A scaling relating the frequency of the eigenmodes to the length-to-depth ratio is provided, based on the analysis results.
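The global eigenvalue problem mentioned above is built on the standard BiGlobal spanwise-periodic modal ansatz (notation assumed here, not quoted from the thesis): perturbations are taken of the form $\mathbf{q}'(x, y, z, t) = \hat{\mathbf{q}}(x, y)\, e^{i(\beta z - \omega t)} + \text{c.c.}$, where $\beta$ is the real spanwise wavenumber and $\omega = \omega_r + i \omega_i$ is the complex eigenvalue; modes with $\omega_i > 0$ are unstable, and the neutral curves correspond to $\omega_i = 0$.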