931 results for PERFORMANCE PREDICTION


Relevance:

60.00%

Publisher:

Abstract:

The focus of this thesis is the development and modeling of an interface architecture for analog signals in mixed-signal SoCs. We claim that the presented approach achieves a wide frequency range and covers a large range of applications with constant performance, combined with digital configuration compatibility. Our primary assumptions are to use a fixed analog block and to place application configurability in the digital domain, which leads to a mixed-signal interface. The fixed analog block avoids the performance loss common to configurable analog blocks, while configurability in the digital domain makes it possible to use all existing tools for high-level design, simulation, and synthesis to implement the target application, with very good performance prediction. The proposed approach uses frequency translation (mixing) of the input signal followed by its conversion to the ΣΔ domain, which allows a fairly constant analog block and a uniform treatment of the input signal from DC to high frequencies. Programmability takes place in the digital ΣΔ domain, where performance can closely match the application specification. Theoretical and simulation models of the interface performance are developed for design space exploration and physical design support. Two prototypes are built and characterized to validate the proposed model and to implement application examples. The use of this interface as a multi-band parametric ADC and as a two-channel analog multiplier and adder is demonstrated, and the multi-channel analog interface architecture is also presented. The characterization measurements confirm the main advantages of the proposed approach.
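To make the signal path concrete, here is a minimal sketch of mixing followed by first-order ΣΔ modulation; all frequencies, amplitudes, and the modulator order are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Illustrative parameters (not from the thesis): a 200 kHz input tone,
# a local oscillator that translates it near DC, and a 1 MHz modulator rate.
fs = 1_000_000                                   # sampling rate (Hz)
t = np.arange(0, 0.005, 1 / fs)
x = 0.5 * np.sin(2 * np.pi * 200_000 * t)        # analog input signal
mixed = x * np.cos(2 * np.pi * 199_000 * t)      # frequency translation (mixing)

# First-order sigma-delta modulator: integrate the error between the
# input and the fed-back 1-bit output, then quantize to +/-1.
integrator, y_prev = 0.0, 0.0
bits = np.empty_like(mixed)
for i, v in enumerate(mixed):
    integrator += v - y_prev
    y_prev = 1.0 if integrator >= 0.0 else -1.0
    bits[i] = y_prev
# 'bits' is the 1-bit stream handed to the configurable digital back end.
```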

Relevance:

60.00%

Publisher:

Abstract:

A total of 1,800 incubating eggs produced by a commercial flock of Cobb broiler breeders were used to determine the effects of storage duration (3 or 18 d) on spread of hatch and chick quality. Chick relative growth (RG) at the end of 7 d of rearing was also determined as a measure of chick performance. Chick quality was defined to encompass several qualitative characteristics, scored according to their importance. Eggs stored for 3 d hatched earlier than those stored for 18 d (P < 0.05). Hatching was normally distributed in both categories of eggs, and the spread of hatch was not affected by storage time (P = 0.69). Storage for 18 d reduced the percentage of high-quality day-old chicks as well as the average chick quality score (P < 0.05). RG varied with length of egg storage, quality of the day-old chick, and incubation duration (P < 0.05). Eighteen-day storage of eggs not only resulted in a longer incubation duration and a lower quality score but also depressed RG. Chick quality as defined in this study was correlated with RG and storage time. It was concluded that day-old chick quality may be a relatively good indicator of broiler performance. The results suggest, however, that to improve the performance-prediction power of chick quality, it would be better to define it as a combination of several qualitative aspects of the day-old chick and juvenile growth to 7 d.
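For reference, relative growth is commonly computed as weight gain over the period expressed as a percentage of initial weight; the snippet below uses that common definition with hypothetical weights, since the abstract does not state the study's exact formula.

```python
def relative_growth(weight_day0_g: float, weight_day7_g: float) -> float:
    """Weight gain over the first 7 d of rearing as a percentage of
    day-old weight (a common definition; the study's exact formula
    is not given in the abstract)."""
    return 100.0 * (weight_day7_g - weight_day0_g) / weight_day0_g

# Hypothetical weights: a 42 g day-old chick reaching 180 g at 7 d.
print(f"RG = {relative_growth(42.0, 180.0):.0f}%")   # RG = 329%
```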

Relevance:

60.00%

Publisher:

Abstract:

The running velocities associated with the lactate minimum (V-lm), heart rate deflection (V-HRd), critical velocity (CV), 3,000 m performance (V-3000), and 10,000 m performance (V-10km) were compared. Additionally, the ability of V-lm and V-HRd to identify sustainable velocities was investigated. Methods. Twenty runners (28.5 ± 5.9 y) performed: 1) a 3,000 m running test for V-3000; 2) an all-out 500 m sprint followed by 6 × 800 m incremental bouts with blood lactate ([lac]) measurements for V-lm; 3) a continuous velocity-incremented test with heart rate measured every 200 m for V-HRd; 4) 30 min endurance test attempts at both V-lm (ET-Vlm) and V-HRd (ET-VHRd). Additionally, the distance-time and velocity-1/time relationships produced CV from 2 predictive trials (500 m and 3,000 m) or 3 predictive trials (500 m, 3,000 m, and the distance reached before exhaustion during ET-VHRd), and a 10 km race was recorded for V-10km. Results. The CV values identified by the different methods did not differ from each other. The results (m·min−1) revealed that V-lm (281 ± 14.8) < CV (292.1 ± 17.5) = V-10km (291.7 ± 19.3) < V-HRd (300.8 ± 18.7) = V-3000 (304 ± 17.5), with high correlations among parameters (P < 0.001). During ET-Vlm participants completed 30 min of running, while on ET-VHRd they lasted only 12.5 ± 8.2 min with increasing [lac]. Conclusion. We showed that the CV and V-lm track protocols are valid for running evaluation and performance prediction, and that the parameters studied have different significance. V-lm reflects the moderate-high intensity domain (below CV), can be sustained without [lac] accumulation, and may be used for long-term exercise, whereas V-HRd overestimates the running intensity that can be sustained for a long time. Additionally, V-3000 and V-HRd reflect the severe intensity domain (above CV).
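For reference, the distance-time CV model is the linear relation d = CV · t + ADC; the sketch below fits it by least squares to hypothetical trial data (the distances and times are illustrative, not the study's).

```python
import numpy as np

# Hypothetical predictive trials: distance run (m) and time to cover it (s).
distances = np.array([500.0, 3000.0, 10000.0])
times = np.array([80.0, 620.0, 2100.0])

# Linear distance-time model: d = CV * t + ADC, where the slope is the
# critical velocity and the intercept the anaerobic distance capacity.
CV, ADC = np.polyfit(times, distances, 1)
print(f"CV = {CV * 60:.1f} m/min, ADC = {ADC:.0f} m")
```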

Relevance:

60.00%

Publisher:

Abstract:

Bit performance prediction has been a challenging problem for the petroleum industry. It is essential to the cost reduction associated with well planning and drilling performance prediction, especially when rig leasing rates tend to follow project demand and barrel-price rises. A methodology to model and predict one of the drill bit performance evaluators, the Rate of Penetration (ROP), is presented herein. As the parameters affecting the ROP are complex and their relationships are not easily modeled, the application of a neural network is suggested. In the present work, a dynamic neural network based on the Auto-Regressive with Extra Input Signals (ARX) model is used to approach the ROP modeling problem. The network was applied to a real offshore oil field data set consisting of information from seven wells drilled with an equal-diameter bit.
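As a minimal sketch of the ARX idea, the code below assembles regressors from past ROP values plus past exogenous drilling inputs and fits the classical linear ARX baseline; the model orders, sample sizes, and input choices (e.g., weight on bit and rotary speed) are assumptions, not the thesis's actual configuration, and the regressor matrix X is what would feed the dynamic neural network.

```python
import numpy as np

def build_arx_dataset(rop, inputs, na=2, nb=2):
    """Assemble ARX-style regressors: past ROP values (autoregressive
    part) plus past exogenous inputs. na/nb are assumed model orders."""
    lag = max(na, nb)
    X, y = [], []
    for k in range(lag, len(rop)):
        row = list(rop[k - na:k]) + list(inputs[k - nb:k].ravel())
        X.append(row)
        y.append(rop[k])
    return np.array(X), np.array(y)

# Hypothetical drilling log: 100 samples of ROP and two exogenous inputs.
rng = np.random.default_rng(0)
rop = rng.uniform(5, 30, 100)            # rate of penetration (m/h)
exo = rng.uniform(0, 1, (100, 2))        # e.g., normalized WOB and RPM
X, y = build_arx_dataset(rop, exo)

# Linear least-squares fit: the classical ARX baseline a network refines.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
```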

Relevance:

60.00%

Publisher:

Abstract:

Anaerobic efforts are commonly required through repeated sprints in many sports, making the anaerobic pathway a target of training. Nevertheless, to identify improvements in this energy pathway it is necessary to assess anaerobic capacity or power, which is usually complex. For this purpose, authors have proposed the use of short running performances for anaerobic ability assessment. Thus, the aim of this study was to find relationships between short running performances and anaerobic power, anaerobic capacity, and repeated sprint ability (RSA). Methods. Thirteen military personnel performed maximal runs of 50 (P50), 100 (P100), and 300 (P300) m on a track, in addition to the running-based anaerobic sprint test (RAST; an RSA and anaerobic power test), the maximal anaerobic running test (MART; an RSA and anaerobic capacity test), and W′ from the critical power model (an anaerobic capacity test). Results. Among the RAST variables, peak power (absolute and relative: r = −0.68, p = 0.03 and r = −0.76, p = 0.01), average power (absolute and relative: both r = −0.83, p < 0.01), and maximum velocity (r = −0.78, p < 0.01) were significantly correlated with P50. The maximum intensity of the MART was negatively and significantly correlated with P100 (r = −0.59), and W′ was not statistically correlated with any of the performances. Conclusion. The MART and W′ were not correlated with short running performances, showing weak performance prediction, probably due to their longer duration relative to the assessed performances. Based on the RAST outcomes, we propose that this protocol can be used during daily training as a short running performance predictor.
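For context, RAST power for each sprint is conventionally computed as P = m · d² / t³; the snippet below applies that standard formula to hypothetical sprint times (the subject mass and times are illustrative, not the study's data).

```python
def rast_power(mass_kg: float, times_s: list[float], distance_m: float = 35.0):
    """Power for each RAST sprint via the standard formula
    P = m * d**2 / t**3; returns peak and average power in watts."""
    powers = [mass_kg * distance_m**2 / t**3 for t in times_s]
    return max(powers), sum(powers) / len(powers)

# Hypothetical times for the six 35 m sprints of a 75 kg runner.
peak, avg = rast_power(75.0, [4.9, 5.0, 5.2, 5.4, 5.5, 5.7])
print(f"peak = {peak:.0f} W, average = {avg:.0f} W")
```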

Relevance:

60.00%

Publisher:

Abstract:

The objective of this Ph.D. thesis is to lay the basis of an all-embracing link analysis procedure that may form a general reference scheme for the future state of the art of RF/microwave link design: it is basically meant as a circuit-level simulation of an entire radio link, with (generally multiple) transmitting and receiving antennas examined by EM analysis. In this way the influence of mutual couplings on the frequency-dependent near-field and far-field performance of each element is fully accounted for. The set of transmitters is treated as a single nonlinear system loaded by the multiport antenna and is analyzed by nonlinear circuit techniques. To establish the connection between transmitters and receivers, the far fields incident onto the receivers are evaluated by EM analysis and are combined by extending an available ray tracing technique to the link study. EM theory is used to describe the receiving array as a linear active multiport network. Link performance in terms of bit error rate (BER) is finally verified a posteriori by a fast system-level algorithm. To validate the proposed approach, four heterogeneous application contexts are provided. A complete MIMO link design in a realistic propagation scenario constitutes the reference case study. The second concerns the design, optimization, and testing of various types of rectennas for power generation from common RF sources. The last two concern the design and implementation of two types of radio identification tags, at X-band and V-band respectively. In all cases, exhaustive nonlinear/electromagnetic co-simulation and co-design is demonstrated to be essential for accurate system performance prediction.
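As a small illustration of the kind of a-posteriori BER check performed at system level, the snippet below evaluates the textbook closed-form QPSK error rate over an AWGN channel; the thesis's algorithm also folds in the nonlinear front end and the EM-derived channel, which this sketch does not attempt.

```python
import numpy as np
from scipy.special import erfc

def qpsk_ber(ebn0_db: float) -> float:
    """Theoretical QPSK bit error rate over AWGN:
    BER = 0.5 * erfc(sqrt(Eb/N0)) -- a textbook closed form used
    here only to illustrate the system-level BER verification step."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * erfc(np.sqrt(ebn0))

for snr_db in (0, 4, 8, 12):
    print(f"Eb/N0 = {snr_db:2d} dB -> BER = {qpsk_ber(snr_db):.2e}")
```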

Relevance:

60.00%

Publisher:

Abstract:

Flow features inside centrifugal compressor stages are very complicated to simulate with numerical tools due to the highly complex geometry and the varying gas conditions across the machine. For this reason, considerable effort is currently being made to increase the fidelity of numerical models during the design and validation phases. Computational fluid dynamics (CFD) plays an increasing role in the performance prediction of centrifugal compressor stages. Historically, CFD was considered reliable for performance prediction on a qualitative level, whereas tests were necessary to predict compressor performance on a quantitative basis. In fact, "standard" CFD, with only the flow path and blades included in the computational domain, is known to be weak in capturing efficiency levels and operating range accurately, due to the underestimation of losses and the lack of secondary flow modeling. This research project aims to fill the accuracy gap between "standard" CFD and test data by including a high-fidelity reproduction of the gas domain and by using advanced numerical models and tools introduced in the author's OEM in-house CFD code. In other words, this thesis describes a methodology by which virtual tests can be conducted on single-stage and multistage centrifugal compressors in a fashion similar to a typical rig test, guaranteeing end users the ability to operate machines with a confidence level not achievable before. Furthermore, the new "high fidelity" approach allowed flow phenomena not fully captured before to be understood, increasing aerodynamicists' capability and confidence in designing high-efficiency and highly reliable centrifugal compressor stages.
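To make the performance-prediction target concrete, a standard post-processing step when reducing CFD (or rig) data to stage performance is the polytropic efficiency computed from total pressure and total temperature ratios; the snippet below shows the perfect-gas form with illustrative numbers (the thesis works with real-gas conditions, which this sketch does not cover).

```python
import math

def polytropic_efficiency(p_ratio: float, t_ratio: float, gamma: float = 1.4) -> float:
    """Polytropic (small-stage) efficiency from total pressure and total
    temperature ratios, assuming a perfect gas with constant gamma."""
    return ((gamma - 1.0) / gamma) * math.log(p_ratio) / math.log(t_ratio)

# Illustrative stage: pressure ratio 2.1, total temperature ratio 1.27.
print(f"eta_p = {polytropic_efficiency(2.1, 1.27):.3f}")
```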

Relevance:

60.00%

Publisher:

Abstract:

This study focuses on a specific engine, i.e., a dual-spool, separate-flow turbofan engine with an interstage turbine burner (ITB). This conventional turbofan engine has been modified to include a secondary isobaric burner, the ITB, in a transition duct between the high-pressure and low-pressure turbines. The preliminary design phase for this modified engine starts with the aerothermodynamic cycle analysis, consisting of parametric (i.e., on-design) and performance (i.e., off-design) cycle analyses. In the parametric analysis, the modified engine's performance parameters are evaluated and compared with those of the baseline engine in terms of design limitations (maximum turbine inlet temperature), flight conditions (such as flight Mach number, ambient temperature, and pressure), and design choices (such as compressor pressure ratio, fan pressure ratio, fan bypass ratio, etc.). A turbine cooling model is also included to account for the effect of cooling air on engine performance. The results from the on-design analysis confirmed the advantage of using the ITB, i.e., higher specific thrust with small increases in thrust specific fuel consumption, less cooling air, and less NOx production, provided that the main burner exit temperature and ITB exit temperature are properly specified. It is also important to identify the critical ITB temperature, beyond which the ITB is turned off and offers no advantage at all. With the encouraging results from the parametric cycle analysis, a detailed performance cycle analysis of the same engine was also conducted for steady-state engine performance prediction. The results from the off-design cycle analysis show that the ITB engine at full throttle setting has enhanced performance over the baseline engine. Furthermore, the ITB engine operating at partial throttle settings exhibits higher thrust at lower specific fuel consumption and improved thermal efficiency over the baseline engine. A mission analysis is also presented to predict fuel consumption in certain mission phases. Excel macro code, Visual Basic for Applications, and Excel neuron cells are combined to enable Excel to perform these cycle analyses. These user-friendly programs compute and plot the data sequentially without forcing users to open other post-processing programs.
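For orientation, the two figures of merit the cycle analysis trades off are specific thrust and thrust specific fuel consumption; the sketch below evaluates the textbook single-stream approximations F/ṁ = (1 + f)·V_exit − V_flight and TSFC = f / (F/ṁ) with illustrative numbers, whereas the thesis's two-stream ITB cycle is considerably more involved.

```python
def specific_thrust_and_tsfc(v_exit: float, v_flight: float, f: float):
    """Textbook single-stream relations: uninstalled specific thrust
    per unit core mass flow, and TSFC as fuel-air ratio over it."""
    spec_thrust = (1.0 + f) * v_exit - v_flight   # N per (kg/s)
    return spec_thrust, f / spec_thrust           # kg fuel / (N*s)

# Illustrative values: 900 m/s exhaust, 250 m/s flight, fuel-air ratio 0.025.
st, tsfc = specific_thrust_and_tsfc(900.0, 250.0, 0.025)
print(f"F/mdot = {st:.0f} N/(kg/s), TSFC = {tsfc * 1e6:.1f} mg/(N*s)")
```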

Relevance:

60.00%

Publisher:

Abstract:

An approximate analytic model of a shared-memory multiprocessor with a cache-only memory architecture (COMA), the bus-based Data Diffusion Machine (DDM), is presented and validated. It describes the timing and interference in the system as a function of the hardware, the protocols, the topology, and the workload. Model results have been compared to results from an independent simulator. The comparison shows good model accuracy, especially for non-saturated systems, where the errors in response times and device utilizations are independent of the number of processors and remain below 10% in 90% of the simulations. Therefore, the model can be used as an average performance prediction tool that avoids expensive simulations in the design of systems with many processors.
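As a minimal sketch of the kind of interference model involved, the snippet below gives an M/M/1-style estimate of mean bus response time as processors are added; the paper's actual model covers the DDM protocol, topology, and workload in far more detail, and all numbers here are illustrative.

```python
def bus_response_time(service_time: float, request_rate: float, n_proc: int) -> float:
    """M/M/1-style mean response time of a shared bus serving n_proc
    processors, each issuing requests at 'request_rate' per second."""
    utilization = n_proc * request_rate * service_time
    if utilization >= 1.0:
        raise ValueError("bus saturated; open queueing formula invalid")
    return service_time / (1.0 - utilization)

# Illustrative values: 100 ns bus service time, 200k requests/s per CPU.
for n in (4, 8, 16):
    print(n, bus_response_time(service_time=1e-7, request_rate=200_000, n_proc=n))
```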

Relevance:

60.00%

Publisher:

Abstract:

A solar cell is a solid-state device that converts the energy of sunlight directly into electricity by the photovoltaic effect. When light with photon energies greater than the band gap is absorbed by a semiconductor material, free electrons and free holes are generated by optical excitation in the material. The main characteristic of a photovoltaic device is the presence of an internal electric field able to separate the free electrons and holes so they can pass out of the material to the external circuit before they recombine. Numerical simulation of photovoltaic devices plays a crucial role in their design, performance prediction, and in understanding the fundamental phenomena ruling their operation. The electrical transport and the optical behavior of the solar cells discussed in this work were studied with the simulation code D-AMPS-1D. This software is an updated version of the one-dimensional (1D) simulation program Analysis of Microelectronic and Photonic Structures (AMPS), initially developed at The Penn State University, USA. Structures such as homojunctions, heterojunctions, multijunctions, etc., resulting from stacking layers of different materials, can be studied by appropriately selecting the characteristic parameters. In this work, examples of cell simulations performed with D-AMPS-1D are shown. In particular, results for Ge photovoltaic devices are presented, and the role of the InGaP buffer on the device was studied. Moreover, a comparison of the simulated electrical parameters with experimental results was performed.
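For intuition about the electrical parameters being compared, the snippet below evaluates the textbook single-diode model of an illuminated cell, I = I_L − I_0·(exp(qV / nkT) − 1); the parameter values are illustrative of a low-bandgap (Ge-like) cell and are not fitted to the devices simulated with D-AMPS-1D.

```python
import numpy as np

def diode_iv(v, i_l=0.035, i_0=1e-6, n=1.2, t=300.0):
    """Single-diode model of an illuminated cell:
    I = I_L - I_0 * (exp(qV / (n k T)) - 1), currents in amperes.
    Parameter values are illustrative, not fitted to real devices."""
    q, k = 1.602e-19, 1.381e-23
    return i_l - i_0 * np.expm1(q * v / (n * k * t))

v = np.linspace(0.0, 0.32, 9)          # sweep toward a Ge-like Voc
print(np.round(diode_iv(v), 4))        # photocurrent rolls off near Voc
```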

Relevance:

60.00%

Publisher:

Abstract:

The PVCROPS project (PhotoVoltaic Cost r€duction, Reliability, Operational performance, Prediction and Simulation), co-financed by the European Commission within the Seventh Framework Programme, has compiled a collection of good and bad practices from actual PV plants in the "Good and bad practices: Manual to improve the quality and reduce the cost of PV systems". All the situations it collects represent the state of the art of existing PV installations all around Europe. They show how the different parts of an installation can be implemented properly or not. The aim of this manual is to provide a reference text that can help any PV actor (installers, electricians, maintenance operators, owners, etc.) not only to check and improve an already existing installation but also, and mainly, to avoid known bad practices when constructing a new PV installation. Thus, by solving known errors a priori, new PV installations will be more reliable, efficient, and cost-effective, and will recover the initial investment in a shorter time. The manual will be freely available on the PVCROPS website in several languages.

Relevance:

60.00%

Publisher:

Abstract:

Performance prediction models for partial-face mechanical excavators, when developed in laboratory conditions, depend on relating the results of a set of rock property tests and indices to the specific cutting energy (SE) for various rock types. Some studies in the literature aim to correlate the geotechnical properties of intact rocks with the SE, especially for massive and widely jointed rock environments. However, those including direct and/or indirect measures of rock fracture parameters, such as rock brittleness and fracture toughness, along with other rock parameters expressing different aspects of rock behavior under drag tools (picks), are rather limited. This study investigated the relationships between indirect measures of rock brittleness and fracture toughness and the SE, based on the results of one new and two previous linear rock cutting programmes. Relationships between the SE, rock strength parameters, and rock index tests were also investigated. Sandstone samples taken from different fields around Ankara, Turkey were used in the new testing programme. Detailed mineralogical analyses, petrographic studies, and rock mechanics and rock cutting tests were performed on the selected sandstone specimens. The assessment of rock cuttability was based on the SE. Three different brittleness indices (B1, B2, and B4) were calculated for the sandstone samples, whereas a toughness index (T-i) developed by Atkinson et al.(1) was employed to represent indirect rock fracture toughness. The relationships between the SE and the large amount of new data obtained from the mineralogical analyses, petrographic studies, rock mechanics, and linear rock cutting tests were evaluated using bivariate correlation and curve fitting techniques, variance analysis, and Student's t-test. Rock cutting and rock property testing data from the well-known studies of McFeat-Smith and Fowell(2) and Roxborough and Philips(3) were also employed in the statistical analyses together with the new data. Laboratory tests and subsequent analyses revealed close correlations between the SE and B4, whereas no statistically significant correlation was found between the SE and T-i. Uniaxial compressive and Brazilian tensile strengths and Shore scleroscope hardness of the sandstones also exhibited strong relationships with the SE. The NCB cone indenter test had the greatest influence on the SE among the engineering properties of the rocks, confirming previous studies in rock cutting and mechanical excavation. It is therefore recommended to employ the easy-to-use NCB cone indenter and Shore scleroscope index tests to estimate the laboratory SE of sandstones ranging from very low to high strength when no rock cutting rig is available to measure it, at least until easy-to-use universal measures of rock brittleness and, especially, rock fracture toughness (an intrinsic rock property) are developed.
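For reference, two widely used strength-based brittleness indices are B1 = σc/σt and B2 = (σc − σt)/(σc + σt); the snippet below computes them for hypothetical sandstone strengths. The study's B4 and the toughness index T-i follow their cited sources and are not reproduced here.

```python
def brittleness_indices(sigma_c: float, sigma_t: float):
    """Two common strength-based brittleness indices:
    B1 = sigma_c / sigma_t and
    B2 = (sigma_c - sigma_t) / (sigma_c + sigma_t)."""
    b1 = sigma_c / sigma_t
    b2 = (sigma_c - sigma_t) / (sigma_c + sigma_t)
    return b1, b2

# Hypothetical sandstone: UCS = 60 MPa, Brazilian tensile strength = 5 MPa.
b1, b2 = brittleness_indices(60.0, 5.0)
print(f"B1 = {b1:.1f}, B2 = {b2:.2f}")
```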

Relevance:

60.00%

Publisher:

Abstract:

Pavement analysis and design for fatigue cracking involves a number of practical problems, such as material assessment/screening and performance prediction. A mechanics-aided method can answer these questions with satisfactory accuracy in a convenient way when it is appropriately implemented. This paper presents two techniques to implement the pseudo J-integral based Paris' law to evaluate and predict fatigue cracking in asphalt mixtures and pavements. The first technique, quasi-elastic simulation, provides a rational and appropriate reference modulus for the pseudo analysis (i.e., viscoelastic to elastic conversion) by making use of a widely used material property: the dynamic modulus. The physical significance of the quasi-elastic simulation is clarified. This technique facilitates the implementation of fracture mechanics models as well as continuum damage mechanics models to characterize fatigue cracking in asphalt pavements. The second technique, modeling the fracture coefficients of the pseudo J-integral based Paris' law, simplifies the prediction of fatigue cracking without performing fatigue tests. The developed prediction models for the fracture coefficients rely on readily available mixture design properties that directly affect fatigue performance, including the relaxation modulus, air void content, asphalt binder content, and aggregate gradation. Sufficient data were collected to develop these prediction models, and the R² values are around 0.9. The presented case studies serve as examples to illustrate how the pseudo J-integral based Paris' law predicts the fatigue resistance of asphalt mixtures and assesses the fatigue performance of asphalt pavements. Future applications include the estimation of the fatigue life of asphalt mixtures/pavements through a distinct criterion that defines fatigue failure by its physical significance.
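To illustrate how a Paris-type law yields a fatigue-life prediction, the sketch below integrates dc/dN = A·(ΔJ)ⁿ from an initial to a final crack size by forward stepping; the coefficients A and n and the ΔJ function are placeholders, not the paper's fitted fracture coefficients.

```python
def fatigue_life(a0, af, A, n, delta_j_of_a, steps=10_000):
    """Integrate a Paris-type law, dc/dN = A * (Delta J)**n, from crack
    size a0 to af by forward stepping; returns the number of cycles."""
    a, N = a0, 0.0
    da = (af - a0) / steps
    while a < af:
        rate = A * delta_j_of_a(a) ** n   # crack growth per load cycle
        N += da / rate
        a += da
    return N

# Hypothetical coefficients and a Delta-J that grows linearly with crack size.
N = fatigue_life(1e-3, 20e-3, A=1e-7, n=2.0, delta_j_of_a=lambda a: 50 * a)
print(f"predicted life: {N:.0f} cycles")
```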

Relevance:

60.00%

Publisher:

Abstract:

This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, they allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that further abstract infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, freeing them from the burden of deploying and managing these resources themselves. In this work, we focus on issues related to scheduling scientific workloads on virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; (2) a performance prediction methodology applicable to the target environment; and (3) a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In the process of addressing these issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, which is tested with medical image processing workloads, is compared to two baseline scheduling solutions, and we find that it outperforms them in terms of both the number of jobs processed and resource utilization by 20-30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
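As a minimal sketch of deadline-aware scheduling driven by predicted runtimes, the snippet below orders jobs earliest-deadline-first on a single virtual machine and checks feasibility against the predictions; the job names, runtimes, and deadlines are hypothetical, and this is not the dissertation's full algorithm.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    predicted_runtime: float   # seconds, from the prediction model
    deadline: float            # seconds from now

def edf_schedule(jobs):
    """Earliest-deadline-first ordering with a feasibility check of each
    job's finish time (per the predictions) against its deadline."""
    finish = 0.0
    for job in sorted(jobs, key=lambda j: j.deadline):
        finish += job.predicted_runtime
        if finish > job.deadline:
            raise RuntimeError(f"{job.name} would miss its deadline")
        yield job.name, finish

# Hypothetical medical imaging jobs on one VM.
print(list(edf_schedule([Job("mri-recon", 120, 400), Job("ct-seg", 60, 100)])))
```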