916 results for modeling and prediction


Relevance:

100.00%

Abstract:

This article characterizes key weaknesses in the ability of current digital libraries to support scholarly inquiry, and as a way to address these, proposes computational services grounded in semiformal models of the naturalistic argumentation commonly found in research literatures. It is argued that a design priority is to balance formal expressiveness with usability, making it critical to coevolve the modeling scheme with appropriate user interfaces for argument construction and analysis. We specify the requirements for an argument modeling scheme for use by untrained researchers and describe the resulting ontology, contrasting it with other domain modeling and semantic web approaches, before discussing passive and intelligent user interfaces designed to support analysts in the construction, navigation, and analysis of scholarly argument structures in a Web-based environment. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 17–47, 2007.

Relevance:

100.00%

Abstract:

Modern high-power, pulsed lasers are driven by strong intracavity fluctuations. Critical in driving the intracavity dynamics are the nontrivial phase profiles generated and their periodic modification by nonlinear mode-coupling, spectral filtering, or dispersion management. Understanding the theoretical origins of the intracavity fluctuations helps guide the design, optimization, and construction of efficient, high-power and high-energy pulsed laser cavities. Three specific mode-locking components are presented for enhancing laser energy: waveguide arrays, spectral filtering, and dispersion management. Each component drives strong intracavity dynamics that are captured through various modeling and analytic techniques.

Relevance:

100.00%

Abstract:

Pavement performance is one of the most important components of the pavement management system. Prediction of the future performance of a pavement section is important in programming maintenance and rehabilitation needs. Models for predicting pavement performance have been developed on the basis of traffic and age. The purpose of this research is to extend the use of a relatively new approach to performance prediction in pavement performance modeling using adaptive logic networks (ALN). Adaptive logic networks have recently emerged as an effective alternative to artificial neural networks for machine learning tasks.

The ALN predictive methodology is applicable to a wide variety of contexts, including prediction of roughness-based indices, composite rating indices, and/or individual pavement distresses. The ALN program requires key information about a pavement section, including the current distress indices, pavement age, climate region, traffic, and other variables, to predict yearly performance values into the future.

This research investigates the effect of different learning rates of the ALN in pavement performance modeling. The methodology can be used at both the network and project levels to predict the long-term performance of a road network. Results indicate that the ALN approach is well suited for pavement performance prediction modeling and shows a significant improvement over results obtained from other artificial intelligence approaches.
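An adaptive logic network represents a predictor as a tree whose leaves are linear functions of the inputs and whose internal nodes take minima and maxima, yielding a continuous piecewise-linear surface. The sketch below illustrates only that structure; the feature choice (age, traffic) and all weights are invented for illustration, not a trained pavement model.

```python
import numpy as np

# Leaves are affine functions of the feature vector; internal nodes take
# min or max of their children, giving a continuous piecewise-linear model.
def linear(w, b):
    return lambda x: float(np.dot(w, x) + b)

def node(op, children):
    return lambda x: op(c(x) for c in children)

# Toy surface: condition index falls with age and traffic under two
# invented regimes, floored at zero. Weights are illustrative only.
f = node(max, [
    node(min, [linear([-2.0, -0.5], 100.0),   # fast deterioration regime
               linear([-0.5, -0.1], 90.0)]),  # slower early-life regime
    linear([0.0, 0.0], 0.0),                  # index cannot drop below 0
])

# f([age_years, traffic]) -> predicted condition index
```

The min node selects whichever deterioration regime predicts the lower index, and the outer max clips the result at zero, mimicking how ALN training carves the input space into linear pieces.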

Relevance:

100.00%

Abstract:

Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a three-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometrical objects, such as high-resolution digital terrain models (DTMs), buildings, and trees. In the past decade, LIDAR has attracted increasing interest from researchers in the fields of remote sensing and GIS. Compared to traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, the voluminous data pose a new challenge for automated extraction of geometrical information, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points.

In this dissertation, a framework is proposed to automatically extract different kinds of geometrical objects, such as terrain and buildings, from LIDAR measurements. These objects are essential to numerous applications such as flood modeling, landslide prediction, and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements: by gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from the non-ground measurements using a region growing algorithm based on a plane-fitting technique. Raw footprints for segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove noise caused by irregularly spaced LIDAR measurements. To reconstruct 3D building models, the raw 2D topology of each building is first extracted and then further adjusted. Since the adjusting operations for simple building models do not work well on 2D topology, a 2D snake algorithm is proposed to adjust the 2D topology; it consists of newly defined energy functions for topology adjustment and a linear algorithm to find the minimal energy value of 2D snake problems. Data sets from urbanized areas including large institutional, commercial, and small residential buildings were employed to test the proposed framework. The results demonstrate that the proposed framework achieves very good performance.
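A progressive morphological filter of the kind described above can be sketched on a rasterized elevation grid. This is an illustrative reimplementation using SciPy's grayscale opening, with assumed window sizes and thresholds, not the dissertation's code.

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological_filter(dem, windows=(3, 9, 21),
                                     thresholds=(0.5, 1.0, 2.5)):
    """Flag non-ground cells in a rasterized LIDAR elevation grid.

    Each stage applies a grayscale morphological opening with a growing
    window; cells rising more than that stage's elevation-difference
    threshold above the opened surface are marked non-ground, so small
    objects (vehicles) fall out first and large ones (buildings) last.
    """
    nonground = np.zeros(dem.shape, dtype=bool)
    surface = dem.astype(float).copy()
    for w, t in zip(windows, thresholds):
        opened = grey_opening(surface, size=(w, w))
        nonground |= (surface - opened) > t
        surface = opened  # the next stage filters the smoothed surface
    return nonground
```

On a flat grid with a 4×4 block raised 5 m (a stand-in for a small building), the 3×3 opening leaves the block intact, but the 9×9 opening removes it and the elevation difference flags those cells as non-ground while the flat terrain remains unflagged.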

Relevance:

100.00%

Abstract:

Due to relative ground movement, buried pipelines experience geotechnical loads. The imposed geotechnical loads may initiate pipeline deformations that affect system serviceability and integrity. Engineering guidelines (e.g., ALA, 2005; Honegger and Nyman, 2001) provide the technical framework to develop idealized structural models to analyze pipe‒soil interaction events and assess pipe mechanical response. The soil behavior is modeled using discrete springs that represent the geotechnical loads per unit pipe length developed during the interaction event. Soil forces are defined along three orthogonal directions (i.e., axial, lateral, and vertical) to analyze the response of pipelines. The nonlinear load‒displacement relationship of soil defined by each spring is independent of neighboring spring elements. However, recent experimental and numerical studies demonstrate significant coupling effects during oblique (i.e., not along one of the orthogonal axes) pipe‒soil interaction events. In the present study, physical modeling using a geotechnical centrifuge was conducted to improve the current understanding of soil load coupling effects on buried pipes in loose and dense sand. A section of pipeline, at shallow burial depth, was translated through the soil at different oblique angles in the axial-lateral plane. The force exerted by the soil on the pipe is critically examined to assess the significance of load coupling effects and to establish a yield envelope. The displacements required to mobilize the soil yield force are also examined to assess potential coupling in mobilization distance. A set of laboratory tests was conducted on the sand used for centrifuge modeling to characterize its stress-strain behavior, which was used to examine the possible mechanisms in the centrifuge model tests. The yield envelope, deformation patterns, and interpreted failure mechanisms obtained from centrifuge modeling are compared with other physical modeling and numerical simulations available in the literature.

Relevance:

100.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

100.00%

Abstract:

Instrument transformers serve an important role in the protection and isolation of AC electrical systems and in the measurement of electrical parameters such as voltage, current, power factor, frequency, and energy. As the name suggests, these transformers are used with suitable measuring instruments such as ammeters, wattmeters, voltmeters, and energy meters. We have seen how higher voltages and currents are transformed into lower magnitudes to provide isolation between power networks, relays, and other instruments, reducing transients, suppressing electrical noise in sensitive devices, and standardizing instruments and relays to a few volts or amperes. Transformer performance directly affects the accuracy of power system measurements and the reliability of relay protection. We classified transformers by purpose, insulating medium, voltage range, temperature range, humidity and environmental effects, indoor and outdoor use, performance, features, specifications, efficiency, cost, applications, benefits, and limitations, which enabled us to establish correct use and selection criteria for given requirements. We also discussed modern low-power instrument transformer products recently launched or offered by renowned companies such as Schneider Electric, Siemens, ABB, ZIV, and G&W. These new products are innovations and problem solvers in the domains of measurement, protection, digital communication, and advanced commercial energy metering. There remains room for improvement in exploring new advantages of low-power instrument transformers in terms of wide linearity, high-frequency range, miniaturization, structural and technological modification, integration, smart frequency modeling, and output prediction of low-power voltage transformers.

Relevance:

90.00%

Abstract:

Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional states of thousands of genes can now be extracted simultaneously using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies, and drug design, as well as for planning high-throughput new experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem is how to validate such approaches and their results. This work presents an objective approach for validation of gene network modeling and identification which comprises three main aspects: (1) generation of Artificial Gene Network (AGN) models through theoretical models of complex networks, which are used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG).

The experimental results indicate that the inference method was sensitive to variation of the average degree k, with its network recovery rate decreasing as k increased. The signal size was important for the inference method to achieve better accuracy in network identification, presenting very good results with small expression profiles. However, the adopted inference method was not able to distinguish distinct structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
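The three-step framework (generate an AGN, simulate expression, identify predictors and compare against the original network) can be illustrated with a Boolean toy model. The network size, the Boolean dynamics, and the exhaustive consistency score below are assumptions made for the sketch, not the article's actual feature-selection criterion.

```python
import itertools
import random

random.seed(0)
N, T = 8, 200  # genes and time points (illustrative sizes)

# (1) Artificial network: every gene regulated by k=2 randomly chosen
# genes, with a random Boolean update rule (an ER-like toy AGN).
regulators = {g: random.sample([h for h in range(N) if h != g], 2)
              for g in range(N)}
rules = {g: {s: random.randint(0, 1)
             for s in itertools.product([0, 1], repeat=2)}
         for g in range(N)}

# Simulate binary temporal expression profiles.
series = [[random.randint(0, 1) for _ in range(N)]]
for _ in range(T):
    prev = series[-1]
    series.append([rules[g][tuple(prev[r] for r in regulators[g])]
                   for g in range(N)])

# (2) Identification: for each target gene, exhaustively score predictor
# pairs by how consistently they explain the target's next state.
def best_predictors(target):
    def score(pair):
        table = {}
        for t in range(T):
            key = tuple(series[t][p] for p in pair)
            table.setdefault(key, []).append(series[t + 1][target])
        return sum(max(v.count(0), v.count(1)) for v in table.values())
    return max(itertools.combinations(range(N), 2), key=score)

# (3) Validation: fraction of true regulator edges recovered.
recovered = {g: set(best_predictors(g)) for g in range(N)}
recovery_rate = sum(len(recovered[g] & set(regulators[g]))
                    for g in range(N)) / (2 * N)
```

Because the true network is known by construction, the recovery rate gives an objective accuracy measure for the inference step, which is the core idea of the validation framework.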

Relevance:

90.00%

Abstract:

Converting aeroelastic vibrations into electricity for low power generation has received growing attention over the past few years. In addition to potential applications for aerospace structures, the goal is to develop alternative and scalable configurations for wind energy harvesting to use in wireless electronic systems. This paper presents modeling and experiments of aeroelastic energy harvesting using piezoelectric transduction with a focus on exploiting combined nonlinearities. An airfoil with plunge and pitch degrees of freedom (DOF) is investigated. Piezoelectric coupling is introduced to the plunge DOF while nonlinearities are introduced through the pitch DOF. A state-space model is presented and employed for the simulations of the piezoaeroelastic generator. A two-state approximation to Theodorsen aerodynamics is used in order to determine the unsteady aerodynamic loads. Three case studies are presented. First the interaction between piezoelectric power generation and linear aeroelastic behavior of a typical section is investigated for a set of resistive loads. Model predictions are compared to experimental data obtained from the wind tunnel tests at the flutter boundary. In the second case study, free play nonlinearity is added to the pitch DOF and it is shown that nonlinear limit-cycle oscillations can be obtained not only above but also below the linear flutter speed. The experimental results are successfully predicted by the model simulations. Finally, the combination of cubic hardening stiffness and free play nonlinearities is considered in the pitch DOF. The nonlinear piezoaeroelastic response is investigated for different values of the nonlinear-to-linear stiffness ratio. The free play nonlinearity reduces the cut-in speed while the hardening stiffness helps in obtaining persistent oscillations of acceptable amplitude over a wider range of airflow speeds. 
Such nonlinearities can be introduced to aeroelastic energy harvesters (exploiting piezoelectric or other transduction mechanisms) for performance enhancement.
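As an illustration of the pitch nonlinearities combined above, a free-play dead band with optional cubic hardening can be written as a restoring-moment function. The functional form and coefficients here are illustrative assumptions, not the identified wind-tunnel parameters.

```python
import math

def restoring_moment(alpha, k_alpha=1.0, delta=0.005, k3=0.0):
    """Pitch restoring moment with free play of half-gap `delta` (rad).

    Inside the dead band (|alpha| <= delta) the linear stiffness k_alpha
    contributes nothing, which lowers the effective stiffness and the
    cut-in speed; the cubic term k3*alpha^3 models hardening that bounds
    limit-cycle amplitudes at higher airflow speeds.
    """
    if abs(alpha) <= delta:
        linear_part = 0.0
    else:
        linear_part = k_alpha * (alpha - math.copysign(delta, alpha))
    return linear_part + k3 * alpha ** 3
```

In a simulation this function would replace the linear `k_alpha * alpha` term in the pitch equation of the state-space model.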

Relevance:

90.00%

Abstract:

The knowledge of the relationship between the spatial variability of the surface soil water content (theta) and its mean across a spatial domain (theta(m)) is crucial for hydrological modeling and for understanding soil water dynamics at different scales. With the aim of comparing the soil moisture dynamics and variability between two land uses, and of exploring the relationship between the spatial variability of theta and theta(m), this study analyzed sets of surface theta measurements performed with an impedance soil moisture probe, collected 136 times during a period of one year in two transects covering different land uses, i.e., a korshinsk peashrub transect (KPT) and a bunge needlegrass transect (BNT), in a watershed of the Loess Plateau, China. Results showed that the temporal pattern of theta behaved similarly for the two land uses, with BNT showing relatively wetter soils during the wet period and relatively drier soils during the dry period. Soil moisture tended to be temporally stable among different dates, and more stable patterns were observed for dates with more similar soil water conditions. The magnitude of the spatial variation of theta in KPT was greater than that in BNT. For both land uses, the standard deviation (SD) of theta in general increased as theta(m) increased, a behavior that could be well described with a natural logarithmic function. A convex relationship between the CV and theta(m) was therefore ascertained, with maximum CV values of 43.5% in KPT and 41.0% in BNT. Geostatistical analysis showed that the range in KPT (9.1 m) was shorter than that in BNT (15.1 m). The nugget effects, the structured variability, and hence the total variability increased as theta(m) increased. For both land uses, the spatial dependency in general increased with increasing theta(m). (C) 2011 Elsevier B.V. All rights reserved.
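The reported SD-versus-theta(m) behavior can be reproduced with a least-squares fit of a natural logarithmic function. The data points below are hypothetical values shaped like the described trend, not the study's measurements.

```python
import numpy as np

# Hypothetical (theta_m, SD) pairs following the reported trend:
theta_m = np.array([0.08, 0.12, 0.18, 0.25, 0.30])  # mean water content
sd = np.array([0.010, 0.018, 0.025, 0.031, 0.034])  # spatial SD of theta

# Least-squares fit of SD = a*ln(theta_m) + b via the log-transformed mean
a, b = np.polyfit(np.log(theta_m), sd, 1)

# Coefficient of variation (%), whose relationship with theta_m was convex
cv = sd / theta_m * 100
```

A positive slope `a` corresponds to the finding that SD increases with theta(m) over the observed moisture range.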

Relevance:

90.00%

Abstract:

Chlorpheniramine maleate (CLOR) enantiomers were quantified by ultraviolet spectroscopy and partial least squares regression. The CLOR enantiomers were prepared as inclusion complexes with beta-cyclodextrin and 1-butanol, with mole fractions in the range from 50 to 100%. For the multivariate calibration, outliers were detected and excluded, and variable selection was performed by interval partial least squares and a genetic algorithm. Figures of merit showed accuracies of 3.63 and 2.83% (S)-CLOR for the root mean square errors of calibration and prediction, respectively. The ellipse confidence region included the point with intercept 0 and slope 1. Precision and analytical sensitivity were 0.57 and 0.50% (S)-CLOR, respectively. The sensitivity, selectivity, adjustment, and signal-to-noise ratio were also determined. The model was validated by a paired t test against results obtained by the high-performance liquid chromatography method of the European Pharmacopoeia and by circular dichroism spectroscopy. The results showed no significant difference between the methods at the 95% confidence level, indicating that the proposed method can be used as an alternative to standard procedures for chiral analysis.

Relevance:

90.00%

Abstract:

Using CD and 2D H-1 NMR spectroscopy, we have identified potential initiation sites for the folding of T4 lysozyme by examining the conformational preferences of peptide fragments corresponding to regions of secondary structure. CD spectropolarimetry showed most peptides were unstructured in water, but adopted partially helical conformations in TFE and SDS solution. This was also consistent with the H-1 NMR data, which showed that the peptides were predominantly disordered in water, although in some cases nascent or small populations of partially folded conformations could be detected. NOE patterns, coupling constants, and deviations from random coil Hα chemical shift values complemented the CD data and confirmed that many of the peptides were helical in TFE and SDS micelles. In particular, the peptide corresponding to helix E in the native enzyme formed a well-defined helix in both TFE and SDS, indicating that helix E potentially forms an initiation site for T4 lysozyme folding. The data for the other peptides indicated that helices D, F, G, and H are dependent on tertiary interactions for their folding and/or stability. Overall, the results from this study, and those of our earlier studies, are in agreement with modeling and hydrogen-deuterium exchange experiments, and support a hierarchical model of folding for T4 lysozyme.

Relevance:

90.00%

Abstract:

The majority of past and current individual-tree growth modelling methodologies have failed to characterise and incorporate structured stochastic components. Rather, they have relied on deterministic predictions or have added an unstructured random component to predictions. In particular, spatial stochastic structure has been neglected, despite being present in most applications of individual-tree growth models. Spatial stochastic structure (also called spatial dependence or spatial autocorrelation) eventuates when spatial influences such as competition and micro-site effects are not fully captured in models. Temporal stochastic structure (also called temporal dependence or temporal autocorrelation) eventuates when a sequence of measurements is taken on an individual tree over time, and variables explaining temporal variation in these measurements are not included in the model. Nested stochastic structure eventuates when measurements are combined across sampling units and differences among the sampling units are not fully captured in the model. This review examines spatial, temporal, and nested stochastic structure and instances where each has been characterised in the forest biometry and statistical literature. Methodologies for incorporating stochastic structure in growth model estimation and prediction are described. Benefits from incorporation of stochastic structure include valid statistical inference, improved estimation efficiency, and more realistic and theoretically sound predictions. It is proposed in this review that individual-tree modelling methodologies need to characterise and include structured stochasticity. Possibilities for future research are discussed. (C) 2001 Elsevier Science B.V. All rights reserved.
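Spatial autocorrelation of the kind discussed is commonly quantified with Moran's I on model residuals. Below is a minimal distance-band implementation; the band width is an assumed parameter, to be chosen from the stand's tree spacing.

```python
import numpy as np

def morans_i(values, coords, band):
    """Moran's I with a binary distance-band spatial weight matrix.

    Values near +1 indicate spatially clustered residuals (structure a
    purely deterministic growth model has failed to capture); values
    near zero indicate no spatial dependence.
    """
    z = values - values.mean()
    # pairwise distances between tree locations
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = ((d > 0) & (d <= band)).astype(float)  # neighbors within the band
    n, s0 = len(values), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)
```

Applied to residuals from a deterministic individual-tree growth model, a significantly positive statistic signals that micro-site or competition effects remain in the error term, motivating the structured stochastic components the review advocates.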

Relevance:

90.00%

Abstract:

High-pressure homogenization is a key unit operation used to disrupt cells containing intracellular bioproducts. Modeling and optimization of this unit are restrained by a lack of information on the flow conditions within a homogenizer valve. A numerical investigation of the impinging radial jet within a homogenizer valve is presented. Results for a laminar and a turbulent (k-epsilon model) jet are obtained using the PHOENICS finite-volume code. Experimental measurement of the stagnation region width and correlation of the cell disruption efficiency with jet stagnation pressure both indicate that the impinging jet in the homogenizer system examined is likely to be laminar under normal operating conditions. Correlation of disruption data with laminar stagnation pressure provides a better description of experimental variability than existing correlations using total pressure drop or the grouping 1/Y(2)h(2).

Relevance:

90.00%

Abstract:

Background: We validated a strategy for diagnosis of coronary artery disease (CAD) and prediction of cardiac events in high-risk renal transplant candidates (at least one of the following: age >= 50 years, diabetes, cardiovascular disease). Methods: A diagnosis and risk assessment strategy was used in 228 renal transplant candidates to validate an algorithm. Patients underwent dipyridamole myocardial stress testing and coronary angiography and were followed up until death, renal transplantation, or cardiac events. Results: The prevalence of CAD was 47%. Stress testing did not detect significant CAD in one-third of patients. The sensitivity, specificity, and positive and negative predictive values of the stress test for detecting CAD were 70, 74, 69, and 71%, respectively. CAD, defined by angiography, was associated with increased probability of cardiac events [log-rank: 0.001; hazard ratio: 1.90, 95% confidence interval (CI): 1.29-2.92]. Diabetes (P=0.03; hazard ratio: 1.58, 95% CI: 1.06-2.45) and angiographically defined CAD (P=0.03; hazard ratio: 1.69, 95% CI: 1.08-2.78) were the independent predictors of events. Conclusion: The results validate our observations in a smaller number of high-risk transplant candidates and indicate that stress testing is not appropriate for the diagnosis of CAD or prediction of cardiac events in this group of patients. Coronary angiography was correlated with events but, because less than 50% of patients had significant disease, it seems premature to recommend the test to all high-risk renal transplant candidates. The results suggest that angiography is necessary in many high-risk renal transplant candidates and that better noninvasive methods are still lacking to identify with precision the patients who will benefit from invasive procedures. Coron Artery Dis 21: 164-167 (C) 2010 Wolters Kluwer Health | Lippincott Williams & Wilkins.
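The reported predictive values follow from the prevalence together with the test's sensitivity and specificity. The 2x2 table below is a rounded reconstruction from the published summary statistics, so the derived values only approximate the reported 69% and 71%.

```python
# Rounded reconstruction of the stress-test 2x2 table from the reported
# prevalence (47% of 228 candidates) and sensitivity/specificity (70%/74%).
n = 228
cad = round(0.47 * n)      # candidates with angiographic CAD
no_cad = n - cad
tp = round(0.70 * cad)     # stress test positive, CAD present
fn = cad - tp
tn = round(0.74 * no_cad)  # stress test negative, CAD absent
fp = no_cad - tn

sensitivity = tp / (tp + fn)
specificity = tn / (fp + tn)
ppv = tp / (tp + fp)       # approximates the reported 69%
npv = tn / (fn + tn)       # approximates the reported 71%
```

Because prevalence here is close to 50%, the predictive values sit near the sensitivity and specificity; at a lower prevalence the same test would have a markedly lower PPV, which is why prevalence matters in interpreting such screening results.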