857 results for importance performance analysis
Abstract:
The prognostic potential of instability indices for summertime convective events in the São Paulo Metropolitan Region is discussed. Five of the eight days in the period analysed were considered rainy, with thunderstorms observed from mid-afternoon onwards. The K Index (KI) remained below 31 in all five events, affected by the presence of a cold, dry layer at mid levels of the atmosphere relative to the low levels. The Total Totals Index (TT) failed to detect severity in three of the five events, with values on those days below the minimum tabulated threshold for convective phenomena (TT < 44). The Lifted Index (LI) ranged from -4.9 to -4.3 in all five cases, values associated with moderate instability. The Showalter Index (SI) indicated the possibility of severe thunderstorms in four of the five cases. Both the SI and the virtual-temperature CAPE (CAPE Tv) were strongly reduced in one sounding with an isothermal layer between 910 and 840 hPa. The CAPE Tv and LI time series showed significant phase agreement, with a high linear correlation between them. CIN Tv ≈ 0 J kg⁻¹, combined with weak vertical wind shear and with at least moderate SI, LI and CAPE Tv, appears to be a common feature of summer days with abundant rainfall and little influence of the large-scale dynamics over the study area.
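The K and Total Totals indices discussed above are simple functions of temperature and dew point at the 850, 700 and 500 hPa mandatory levels. As a minimal illustration of how they are obtained from a sounding (a sketch with hypothetical values, not the thesis' own processing chain):

```python
def k_index(t850, td850, t700, td700, t500):
    """K Index (deg C): mid-level lapse rate plus low-level moisture minus
    mid-level dryness.  Values below ~31 suggest limited thunderstorm potential."""
    return (t850 - t500) + td850 - (t700 - td700)

def total_totals(t850, td850, t500):
    """Total Totals Index (deg C): Vertical Totals plus Cross Totals.
    Values below ~44 are usually taken as non-convective."""
    return (t850 - t500) + (td850 - t500)

# Hypothetical sounding values (deg C) at the mandatory pressure levels
print(k_index(t850=20.0, td850=16.0, t700=8.0, td700=-2.0, t500=-8.0))  # 34.0
print(total_totals(t850=20.0, td850=16.0, t500=-8.0))                   # 52.0
```

The Lifted and Showalter indices and CAPE additionally require lifting a parcel along the sounding, which is omitted here.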
Abstract:
Large-scale wireless ad hoc networks of computers, sensors, PDAs, etc. (i.e. nodes) are revolutionizing connectivity and leading to a paradigm shift from centralized systems to highly distributed and dynamic environments. One example of ad hoc networks are sensor networks, which are usually composed of small units able to sense and transmit elementary data to a sink, where they are subsequently processed by an external machine. Recent improvements in the memory and computational power of sensors, together with the reduction in energy consumption, are rapidly changing the potential of such systems, moving the attention towards data-centric sensor networks. A plethora of routing and data management algorithms have been proposed for network path discovery, ranging from broadcasting/flooding-based approaches to those using global positioning systems (GPS). We studied WGrid, a novel decentralized infrastructure that organizes wireless devices in an ad hoc manner, where each node has one or more virtual coordinates through which both message routing and data management occur without relying on either flooding/broadcasting operations or GPS. The resulting ad hoc network does not suffer from the dead-end problem, which arises in geographic routing when a node is unable to locate a neighbor closer to the destination than itself. WGrid provides multidimensional data management capability, since the nodes' virtual coordinates can act as a distributed database without requiring any special implementation or reorganization. Any kind of data (both one- and multidimensional) can be distributed, stored and managed. We show how a location service can be easily implemented so that any search is reduced to a simple query, as for any other data type. WGrid has then been extended by adopting a replication methodology; we call the resulting algorithm WRGrid. Just like WGrid, WRGrid acts as a distributed database without requiring any special implementation or reorganization, and any kind of data can be distributed, stored and managed. We have evaluated the benefits of replication on data management and found, from experimental results, that it can halve the average number of hops in the network. The direct consequences are a significant improvement in energy consumption and a better workload balance among sensors (number of messages routed by each node). Finally, thanks to the replicas, whose number can be chosen arbitrarily, the resulting sensor network can cope with sensor disconnections and connections, due to sensor failures, without data loss. Another extension of WGrid is W*Grid, which strongly improves network recovery from link and/or device failures that may happen due to crashes, battery exhaustion or temporary obstacles. W*Grid guarantees, by construction, at least two disjoint paths between each pair of nodes. This implies that recovery in W*Grid occurs without broadcast transmissions, guaranteeing robustness while drastically reducing energy consumption. An extensive number of simulations shows the efficiency, robustness and traffic load of the resulting networks under several scenarios of device density and number of coordinates. The performance analysis has been compared against existing algorithms in order to validate the results.
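For context, the dead-end problem mentioned above arises in plain greedy geographic forwarding; the sketch below (a generic illustration with hypothetical data structures, not WGrid's actual virtual-coordinate scheme) shows the step at which greedy routing gets stuck, which WGrid is designed to avoid:

```python
import math

def greedy_next_hop(node, dest, neighbors, pos):
    """One step of plain greedy geographic forwarding: pick the neighbor
    closest to the destination.  Returns None when no neighbor improves on
    the current node, i.e. the 'dead-end' situation that virtual-coordinate
    routing such as WGrid avoids by construction."""
    def dist(a, b):
        (ax, ay), (bx, by) = pos[a], pos[b]
        return math.hypot(ax - bx, ay - by)

    best = min(neighbors[node], key=lambda n: dist(n, dest), default=None)
    if best is None or dist(best, dest) >= dist(node, dest):
        return None  # dead end: no neighbor is closer to the destination
    return best
```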
Abstract:
The development of the digital electronics market is founded on the continuous reduction of transistor size, which reduces the area, power and cost of integrated circuits while increasing their computational performance. This trend, known as technology scaling, is approaching the nanometer scale. The uncertainty of the lithographic process in the manufacturing stage grows as transistor size scales down, resulting in larger parameter variation in future technology generations. Furthermore, the exponential relationship between the leakage current and the threshold voltage is limiting the scaling of the threshold and supply voltages, increasing the power density and creating local thermal issues such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness; combined with high temperatures and frequent thermal cycles, this speeds up wear-out processes. These effects can no longer be addressed at the process level alone. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as system and logic, and new approaches called Design For Manufacturability (DFM) and Design For Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and systems able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) new analysis algorithms able to predict the thermal behavior of the system and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system; iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies that cope with the future challenges, for instance: i) thermal management strategies that increase the reliability and lifetime of devices by acting on tunable parameters, such as the supply voltage or body bias; ii) error-detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of the MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library that has been integrated into a cycle-accurate NoC simulator and into an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing the system reliability. The thesis therefore advocates the need to integrate thermal analysis into the first design stages of embedded NoC design.
Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor. The results confirmed the capability of self-timed logic to increase manufacturability and reliability. Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variation. In this case we found that low-swing links are markedly more robust to systematic process variation and respond well to compensation techniques such as ASV and ABB. Hence low-swing signalling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool into the first stages of the design flow.
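As an illustration of the high-level wear-out models mentioned above (not the specific models used in the thesis), Black's equation for electromigration links MTTF to current density and temperature; the sketch below uses placeholder fitting parameters:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def mttf_electromigration(j, temp_k, a=1.0, n=2.0, ea=0.9):
    """Black's equation: MTTF = A * J**(-n) * exp(Ea / (k*T)).
    a, n and ea are technology-dependent fitting parameters; the values
    here are placeholders, not calibrated to any real process."""
    return a * j ** (-n) * math.exp(ea / (K_BOLTZMANN_EV * temp_k))

# Ratio ~0.1: running a hot spot 30 K hotter (390 K vs 360 K) cuts the
# predicted electromigration lifetime by roughly an order of magnitude,
# which is why thermal management pays off in reliability terms.
print(mttf_electromigration(j=1e6, temp_k=390.0) /
      mttf_electromigration(j=1e6, temp_k=360.0))
```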
Abstract:
Within scientific research in sport, Performance Analysis is attracting growing interest. Performance Analysis means the analysis of competitive performance both from the biomechanical point of view and from that of notational analysis. In this thesis, competitive performance in table tennis was analysed by means of notational analysis, starting from the study of the performance indicators that are most important from a technical-tactical point of view and from their selection through a study of the reliability of data collection. Attention was then focused on an original technical aspect, the link between footwork and strokes, recalling that good footwork technique allows the player to move quickly towards the ball in order to play the best stroke. Finally, the main aim of the thesis was to compare three selected categories of athletes: world-class male (M), elite European junior (J) and world-class female (F). Most rallies begin with a short serve to the middle of the table and continue with a push return (M) or a backhand flick (J). The following stroke is mainly a forehand topspin after a pivot step, or a backhand topspin without footwork. The M and J athletes counter-attack more often with forehand topspin against topspin, while the F athletes prefer less risky strokes, blocking with the backhand and continuing with backhand drives. By studying the performance of athletes of different categories and genders, it is possible to improve strategic choices before and during matches. Multivariate statistical analyses (log-linear models) made it possible to validate scientifically both the procedures already used in the literature and the innovative ones developed for the first time for this study.
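As a sketch of the log-linear modelling approach mentioned above (with invented counts, not the thesis data), a Poisson regression of stroke frequencies on athlete category and stroke type can be fitted as follows:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical stroke-frequency counts by athlete category (not the thesis data)
df = pd.DataFrame({
    "category": ["M", "M", "J", "J", "F", "F"],
    "stroke":   ["fh_topspin", "bh_block"] * 3,
    "count":    [120, 40, 95, 55, 60, 110],
})

# Log-linear model: Poisson regression of cell counts on the two factors and
# their interaction; a significant interaction indicates that stroke choice
# depends on the athlete category.
model = smf.glm("count ~ category * stroke", data=df,
                family=sm.families.Poisson()).fit()
print(model.summary())
```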
Abstract:
In a world focused on the need to produce energy for a growing population while reducing atmospheric emissions of carbon dioxide, organic Rankine cycles represent one solution towards this goal. This study focuses on the design and optimization of axial-flow turbines for organic Rankine cycles. From the turbine designer's point of view, most of these fluids exhibit peculiar characteristics, such as a small enthalpy drop, a low speed of sound and a large expansion ratio. A computational model for the prediction of axial-flow turbine performance is developed and validated against experimental data. The model calculates turbine performance within an accuracy range of ±3%. The design procedure is coupled with an optimization process performed using a genetic algorithm, where the turbine total-to-static efficiency is the objective function. The computational model is integrated into a wider analysis of thermodynamic cycle units by providing the optimal turbine design. First, the calculation routine is applied in the context of the Draugen offshore platform, where three heat recovery systems are compared. The turbine performance is investigated for three competing bottoming cycles: an organic Rankine cycle (operating with cyclopentane), a steam Rankine cycle and an air bottoming cycle. Findings indicate the air turbine as the most efficient solution (total-to-static efficiency = 0.89), while the cyclopentane turbine proves to be the most flexible and compact technology (2.45 ton/MW and 0.63 m³/MW). Furthermore, the study shows that, for the organic and steam Rankine cycles, the optimal design configurations of the expanders do not coincide with those of the thermodynamic cycles. This suggests that a more accurate analysis can be obtained by including the computational model in the simulations of the thermodynamic cycles. Afterwards, the performance analysis is carried out by comparing three organic fluids: cyclopentane, MDM and R245fa. Results suggest MDM as the most effective fluid from the turbine performance viewpoint (total-to-total efficiency = 0.89). On the other hand, cyclopentane guarantees a greater net power output of the organic Rankine cycle (P = 5.35 MW), while R245fa represents the most compact solution (1.63 ton/MW and 0.20 m³/MW). Finally, the influence of the composition of an isopentane/isobutane mixture on both the thermodynamic cycle performance and the expander isentropic efficiency is investigated. Findings show how the mixture composition affects the turbine efficiency and hence the cycle performance. Moreover, the analysis demonstrates that the use of binary mixtures leads to an enhancement of the thermodynamic cycle performance.
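The genetic-algorithm optimization described above can be sketched generically as follows; the toy efficiency function and the variable bounds are placeholders, not the thesis' mean-line turbine model:

```python
import random

def efficiency(x):
    """Toy stand-in for the total-to-static efficiency as a function of two
    normalised design variables (e.g. flow and loading coefficients); the
    real model evaluates a full mean-line turbine design."""
    phi, psi = x
    return 0.90 - (phi - 0.6) ** 2 - 0.5 * (psi - 1.1) ** 2

def genetic_optimize(fitness, bounds, pop_size=40, generations=60,
                     mutation=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # arithmetic crossover
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation:                  # uniform mutation
                    child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_optimize(efficiency, bounds=[(0.3, 1.0), (0.8, 2.0)])
print(best, efficiency(best))
```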
Abstract:
In the present thesis we address the problem of detecting and localizing, with microwave imaging (MWI), a small spherical target with characteristic electrical properties inside a volume of cylindrical shape representing the female breast. One of the main contributions of this project is the extension of the existing linear inversion algorithm from planar-slice to volume reconstruction; the results obtained under the same conditions and experimental setup are reported for the two approaches. A preliminary comparison and performance analysis of the reconstruction algorithms is performed via numerical simulations in a software-created environment: a single dipole antenna is used to illuminate the virtual breast phantom from different positions and, for each position, the corresponding scattered field value is recorded. The collected data are then exploited to reconstruct the investigation domain, along with the scatterer position, in the form of an image called a pseudospectrum. In this process the tumor is modeled as a dielectric sphere of small radius and, for electromagnetic scattering purposes, it is treated as a point-like source. To improve the performance of the reconstruction technique, we repeat the acquisition for a number of frequencies in a given range: the pseudospectra reconstructed from single-frequency data are incoherently combined with the MUltiple SIgnal Classification (MUSIC) method, which returns an overall enhanced image. We exploit this multi-frequency approach to test the performance of the 3D linear inversion reconstruction algorithm while varying the source position inside the phantom and the height of the antenna plane. The analysis results and the reconstructed images are reported. Finally, we perform 3D reconstruction from experimental data gathered with the acquisition system in the microwave laboratory at DIFA, University of Bologna, for a recently developed breast-phantom prototype; the obtained pseudospectrum and the performance analysis for the real model are reported.
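A minimal sketch of the single-frequency MUSIC pseudospectrum computation referred to above (assuming generic steering vectors; the thesis' forward model and multi-frequency combination are more elaborate):

```python
import numpy as np

def music_pseudospectrum(scattered, steering, n_sources=1):
    """Single-frequency MUSIC: project candidate steering vectors onto the
    noise subspace of the data covariance; peaks of the pseudospectrum
    indicate likely scatterer positions.

    scattered : (n_antennas, n_snapshots) complex measurements
    steering  : (n_antennas, n_candidates) model vectors, one per test point
    """
    cov = scattered @ scattered.conj().T / scattered.shape[1]
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    noise_subspace = eigvecs[:, :-n_sources]      # drop the signal directions
    proj = noise_subspace.conj().T @ steering     # (n_noise, n_candidates)
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

# Multi-frequency use (as in the thesis): compute one pseudospectrum per
# frequency and combine them incoherently, e.g. by summation.
```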
Abstract:
This thesis presents two frameworks, a software framework and a hardware core manager framework, which together can be used to develop a processing platform based on a distributed system of field-programmable gate array (FPGA) boards. The software framework provides users with the ability to easily develop applications that exploit the processing power of FPGAs, while the hardware core manager framework gives users the ability to configure and interact with multiple FPGA boards and/or hardware cores. This thesis describes the design and development of these frameworks and analyzes the performance of a system that was constructed using them. The performance analysis included measuring the effect of incorporating additional hardware components into the system and comparing the system to a software-only implementation. This work draws conclusions based on the results of the performance analysis and offers suggestions for future work.
Abstract:
Hepatitis C virus (HCV) vaccine efficacy may crucially depend on immunogen length and coverage of viral sequence diversity. However, covering a considerable proportion of the circulating viral sequence variants would likely require long immunogens, which, for the conserved portions of the viral genome, would contain unnecessarily redundant sequence information. In this study, we present the design and in vitro performance analysis of a novel "epitome" approach that compresses frequent immune targets of the cellular immune response against HCV into a shorter immunogen sequence. Compression of immunological information is achieved by partially overlapping shared sequence motifs between individual epitopes. At the same time, coverage of sequence diversity is provided by taking advantage of emerging cross-reactivity patterns among epitope variants, so that epitope variants associated with the broadest variant cross-recognition are preferentially included. Processing and presentation analysis of specific epitopes included in such a compressed, in vitro-expressed HCV epitome indicated effective processing of the majority of tested epitopes, although the presentation of some epitopes may require refined sequence design. Together, the present study establishes the epitome approach as a potentially powerful tool for vaccine immunogen design, especially suitable for the induction of cellular immune responses against highly variable pathogens.
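The compression principle, partially overlapping shared sequence motifs between epitopes, can be illustrated with a simple greedy merge; the peptide strings below are invented, and the published epitome design also weighs variant cross-recognition, which this sketch ignores:

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def compress_epitopes(epitopes):
    """Greedily merge epitope sequences by their largest pairwise overlap,
    yielding a shorter 'epitome'-like string that still contains every
    input epitope as a substring."""
    seqs = list(dict.fromkeys(epitopes))          # drop duplicates, keep order
    while len(seqs) > 1:
        k, i, j = max(((overlap(a, b), i, j)
                       for i, a in enumerate(seqs)
                       for j, b in enumerate(seqs) if i != j),
                      key=lambda t: t[0])
        merged = seqs[i] + seqs[j][k:]
        seqs = [s for idx, s in enumerate(seqs) if idx not in (i, j)] + [merged]
    return seqs[0]

# Hypothetical overlapping peptide sequences (not real HCV epitopes)
print(compress_epitopes(["KLVALGINAV", "ALGINAVAYY", "NAVAYYRGLD"]))
# -> "KLVALGINAVAYYRGLD": 17 residues covering 30 residues of input
```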
Abstract:
Financial, economic, and biological data collected from cow-calf producers who participated in the Illinois and Iowa Standardized Performance Analysis (SPA) programs were used in this study. Data were collected for the 1996 through 1999 calendar years, with each herd within a year representing one observation. This resulted in a final database of 225 observations (117 from Iowa and 108 from Illinois) from commercial herds ranging in size from 20 to 373 cows. Two analyses were conducted, one utilizing financial cost-of-production data, the other economic cost-of-production data. Each observation was analyzed as the difference from the mean for that year. The dependent variable utilized in both the financial and economic models as an indicator of profit was return to unpaid labor and management per cow (RLM). Used as independent variables were the five factors that make up total annual cow cost: feed cost, operating cost, depreciation cost, capital charge, and hired labor, all on an annual cost-per-cow basis. In the economic analysis, family labor was also included. Production factors evaluated as independent variables in both models were calf weight, calf price, cull weight, cull price, weaning percentage, and calving distribution. Herd size and investment were also analyzed. All financial factors analyzed were significantly correlated to RLM (P < .10) except cull weight and cull price. All economic factors analyzed were significantly correlated to RLM (P < .10) except calf weight, cull weight and cull price. Results of the financial prediction equation indicate that eight measurements are capable of explaining over 82 percent of the farm-to-farm variation in RLM. Feed cost is the overriding factor driving RLM in both the financial and economic stepwise regression analyses; in both, over 50 percent of the herd-to-herd variation in RLM could be explained by feed cost. Financial feed cost is correlated (P < .001) with operating cost, depreciation cost, and investment. Economic feed cost is correlated (P < .001) with investment and operating cost, as well as capital charge. Operating cost, depreciation, and capital charge were all negatively correlated (P < .10) with herd size, and positively correlated (P < .01) with feed cost in both analyses. Operating costs were positively correlated with capital charge and investment (P < .01) in both analyses. In the financial regression model, depreciation cost was the second critical factor, explaining almost 9 percent of the herd-to-herd variation in RLM, followed by operating cost (5 percent). Calf weight had a greater impact than calf price on RLM in both the financial and economic regression models. Calf weight was the fourth indicator of RLM in the financial model and was similar in magnitude to operating cost. Investment was not a significant variable in either regression model; however, it was highly correlated with a number of the significant cost variables, including feed cost, depreciation cost, and operating cost (P < .001, financial; P < .10, economic). Cost factors were far more influential in driving RLM than production, reproduction, or producer-controlled marketing factors. Of these cost factors, feed cost had by far the largest impact. As producers focus attention on the factors that affect the profitability of the operation, feed cost is the most critical control point because it was responsible for over 50 percent of the herd-to-herd variation in profit.
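A minimal sketch of the within-year analysis described above (deviations from the yearly mean followed by a regression of RLM on a cost factor), using invented records rather than the SPA data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical SPA-style records (not the Illinois/Iowa data): each row is
# one herd-year, with an annual per-cow feed cost and the profit measure RLM.
df = pd.DataFrame({
    "year":      [1996, 1996, 1997, 1997, 1998, 1998],
    "rlm":       [55.0, 20.0, 65.0, -10.0, 40.0, 5.0],
    "feed_cost": [180.0, 230.0, 170.0, 260.0, 190.0, 240.0],
})

# Express each observation as its deviation from that year's mean,
# mirroring the "difference from the mean for that year" step above.
dev = df.groupby("year").transform(lambda s: s - s.mean())

# Simple regression of RLM deviations on feed-cost deviations.
print(smf.ols("rlm ~ feed_cost", data=dev).fit().summary())
```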
Abstract:
Fifteen beef cow-calf producers in southern Iowa were selected based on locality, management level, historical date of grazing initiation, and desire to participate in the project. In 1997 and 1998, all producers kept records of production and economic data using the Integrated Resource Management-Standardized Performance Analysis (IRM-SPA) records program. At the initiation of grazing on each farm in 1997 and 1998, the Julian date, degree-days, cumulative precipitation, and soil moisture, phosphorus, and potassium concentrations were determined, along with soil pH, temperature, and load-bearing capacity, and forage mass, sward height, morphology, and dry matter concentration. Over the grazing season, forage production, measured both as cumulative mass and as sward height, forage in vitro digestible dry matter concentration, and crude protein concentration were determined monthly. In the fall of 1996, the primary species in the pastures on the farms used in this project were cool-season grasses, which composed 76% of the live forage, whereas legumes and weeds composed 8.3 and 15.3%, respectively. The average number of paddocks was 4.1, reflecting a low-intensity rotational stocking system on most farms. The average dates of grazing initiation were May 5 and April 29 in 1997 and 1998, respectively, with standard deviations of 14.8 and 14.1 days. Because the average soil moisture of 23% was dry and did not differ between years, it seems that most producers delayed the initiation of grazing to avoid muddy conditions by initiating grazing at a nearly equal soil moisture. However, Julian date, degree-days, soil temperature, and morphology index at grazing initiation were negatively related to seasonal forage production, measured as mass or sward height, in 1998, and forage mass and height at grazing initiation were negatively related to seasonal forage production, measured as sward height, in 1997. Moreover, the concentrations of digestible dry matter at the initiation of and during the grazing season, and the concentrations of crude protein during the grazing season, were lower than desired for optimal animal performance. Because the mean seasonal digestible dry matter concentration was negatively related to initial forage mass in 1997, and mean seasonal crude protein concentrations were negatively related to the Julian date, degree-days, and morphology indices in both years, it seems that delaying the initiation of grazing until pasture soils are not muddy limits the quality as well as the quantity of pasture forage. In 1997, forage production and digestibility were positively related to the soil phosphorus concentration. Soil potassium concentration was positively related to forage digestibility in 1997 and to forage production and crude protein concentration in 1998. Increasing the number of paddocks increased forage production, measured as sward height, in 1997, and forage digestible dry matter concentration in 1998. Increasing the yields or the concentrations of digestible dry matter or crude protein of pasture forage reduced the cost of purchased feed per cow.
Abstract:
This paper deals with scheduling batch (i.e., discontinuous), continuous, and semicontinuous production in process industries (e.g., chemical, pharmaceutical, or metal casting industries) where intermediate storage facilities and renewable resources (processing units and manpower) of limited capacity have to be observed. First, different storage configurations typical of process industries are discussed. Second, a basic scheduling problem covering the three above production modes is presented. Third, (exact and truncated) branch-and-bound methods for the basic scheduling problem and the special case of batch scheduling are proposed and subjected to an experimental performance analysis. The solution approach presented is flexible and in principle simple, and it can (approximately) solve relatively large problem instances with sufficient accuracy.
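A generic best-first branch-and-bound skeleton of the kind referred to above (a sketch only; the paper's branching schemes and bounds for storage and resource constraints are problem-specific):

```python
import heapq

def branch_and_bound(root, branch, lower_bound, is_complete, objective):
    """Generic best-first branch-and-bound for minimisation.

    branch(node)      -> iterable of child nodes (partial schedules)
    lower_bound(node) -> optimistic bound on any completion of the node
    is_complete(node) -> True when the node is a full schedule
    objective(node)   -> cost (e.g. makespan) of a complete schedule
    """
    best_value, best_node = float("inf"), None
    heap = [(lower_bound(root), 0, root)]
    counter = 1                                  # tie-breaker for the heap
    while heap:
        bound, _, node = heapq.heappop(heap)
        if bound >= best_value:
            continue                             # prune: cannot beat the incumbent
        if is_complete(node):
            best_value, best_node = objective(node), node
            continue
        for child in branch(node):
            lb = lower_bound(child)
            if lb < best_value:
                heapq.heappush(heap, (lb, counter, child))
                counter += 1
    return best_node, best_value
```

A truncated variant, as evaluated in the paper, would simply stop the loop after a node or time limit and return the best schedule found so far.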
Abstract:
This paper introduces an area- and power-efficient approach for compressive recording of cortical signals used in an implantable system prior to transmission. Recent research on compressive sensing has shown promising results for sub-Nyquist sampling of sparse biological signals. Still, any large-scale implementation of this technique faces critical issues caused by the increased hardware intensity. The cost of implementing compressive sensing in a multichannel system, in terms of area usage, can be significantly higher than that of a conventional data acquisition system without compression. To tackle this issue, a new multichannel compressive sensing scheme is proposed that exploits the spatial sparsity of the signals recorded from the electrodes of the sensor array. The analysis shows that with this method the power efficiency is preserved to a great extent while the area overhead is significantly reduced, resulting in an improved power-area product. The proposed circuit architecture is implemented in a UMC 0.18 µm CMOS technology. Extensive performance analysis and design optimization have been carried out, resulting in a low-noise, compact and power-efficient implementation. The results of simulations and subsequent reconstructions show the possibility of recovering fourfold-compressed intracranial EEG signals with an SNR as high as 21.8 dB, while consuming 10.5 µW of power within an effective area of 250 µm × 250 µm per channel.
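As a sketch of the compressive sensing principle underlying the paper (not its multichannel circuit or its actual reconstruction stage), a random measurement matrix compresses a synthetic sparse signal fourfold and orthogonal matching pursuit recovers it:

```python
import numpy as np

def omp(phi, y, sparsity):
    """Orthogonal Matching Pursuit: recover a sparse vector x from the
    compressed measurements y = phi @ x (a standard CS decoder; the paper's
    own reconstruction may differ)."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        corr = np.abs(phi.T @ residual)
        corr[support] = 0                          # do not reselect atoms
        support.append(int(np.argmax(corr)))
        coeffs, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ coeffs
    x_hat = np.zeros(phi.shape[1])
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                               # fourfold compression, 5-sparse signal
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
phi = rng.standard_normal((m, n)) / np.sqrt(m)     # random measurement matrix
x_hat = omp(phi, phi @ x, k)
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))   # small reconstruction error
```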
Abstract:
The article investigates the intriguing interplay of digital comics and live-action elements in a detailed performance analysis of TeZukA (2011) by choreographer Sidi Larbi Cherkaoui. This dance theatre production enacts the life story of Osamu Tezuka and some of his famous manga characters, interweaving performers and musicians with large-scale projections of the mangaka’s digitised comics. During the show, the dancers perform different ‘readings’ of the projected manga imagery: e.g. they swipe panels as if using portable touchscreen displays, move synchronously to animated speed lines, and create the illusion of being drawn into the stories depicted on the screen. The main argument is that TeZukA makes visible, demonstrates and reflects upon different ways of delivering, reading and interacting with digital comics. In order to verify this argument, the paper uses ideas developed in comics and theatre studies to draw more specifically on the use of digital comics in this particular performance.
Abstract:
Multiuser multiple-input multiple-output (MIMO) downlink (DL) transmission schemes experience both multiuser interference and inter-antenna interference. The singular value decomposition provides an appropriate means of processing the channel information and allows us to take the individual user's channel characteristics into account, rather than treating all users' channels jointly as in zero-forcing (ZF) multiuser transmission techniques. However, while uncorrelated MIMO channels have attracted a lot of attention and reached a state of maturity, the performance analysis in the presence of antenna fading correlation, which decreases the channel capacity, requires substantial further research. The joint optimization of the number of activated MIMO layers and the number of bits per symbol, along with the appropriate allocation of the transmit power, shows that not necessarily all user-specific MIMO layers have to be activated in order to minimize the overall BER under the constraint of a given fixed data throughput.
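A minimal sketch of the SVD-based processing mentioned above: precoding with the right singular vectors and filtering with the left ones decomposes the MIMO channel into parallel layers whose gains are the singular values (noise, antenna correlation and bit/power loading are omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)
n_rx, n_tx = 4, 4
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)   # flat-fading channel

# SVD-based transmission: precode with V, filter with U^H, so the channel
# decomposes into parallel layers with gains given by the singular values.
U, s, Vh = np.linalg.svd(H)
x = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)   # layer symbols
y = U.conj().T @ (H @ (Vh.conj().T @ x))                         # noiseless receive filter output
print(np.allclose(y, s * x))   # True: inter-layer interference is removed

# Layers with small singular values can be left inactive and their power and
# bits reallocated to stronger layers, as discussed in the abstract above.
```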
Abstract:
The integration of powerful partial evaluation methods into practical compilers for logic programs is still far from reality. This is related both to 1) efficiency issues and to 2) the complications of dealing with practical programs. Regarding efficiency, the most successful unfolding rules used nowadays are based on structural orders applied over (covering) ancestors, i.e., a subsequence of the atoms selected during a derivation. Unfortunately, maintaining the structure of the ancestor relation during unfolding introduces significant overhead. We propose an efficient, practical local unfolding rule based on the notion of covering ancestors which can be used in combination with any structural order and allows a stack-based implementation without losing any opportunities for specialization. Regarding the second issue, we propose assertion-based techniques which allow our approach to deal with real programs that include (Prolog) built-ins and external predicates in a very extensible manner. Finally, we report on our implementation of these techniques in a practical partial evaluator, embedded in a state-of-the-art compiler which uses global analysis extensively (the Ciao compiler and, specifically, its preprocessor CiaoPP). The performance analysis of the resulting system shows that our techniques, in addition to dealing with practical programs, are also significantly more efficient in time and somewhat more efficient in memory than traditional tree-based implementations.
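A simplified sketch of the stack-based covering-ancestor check (using a crude term-size order as a stand-in for the structural orders, such as homeomorphic embedding, actually used in practice; the term representation is hypothetical):

```python
def term_size(term):
    """Crude structural measure: count symbols in a nested-tuple term
    such as ('app', ('cons', 'X', 'Xs'), 'Ys', 'Zs')."""
    if isinstance(term, tuple):
        return 1 + sum(term_size(t) for t in term[1:])
    return 1

def safe_to_unfold(atom, ancestor_stack):
    """Allow unfolding only while the selected atom is not larger than any
    covering ancestor with the same predicate symbol; otherwise the
    'whistle' blows and local unfolding stops at this atom."""
    pred = atom[0] if isinstance(atom, tuple) else atom
    for anc in ancestor_stack:
        anc_pred = anc[0] if isinstance(anc, tuple) else anc
        if anc_pred == pred and term_size(atom) >= term_size(anc):
            return False
    return True

# During local unfolding, the selected atom is pushed onto the stack before
# its body is derived and popped once that derivation is complete, which is
# what makes a stack-based implementation of the ancestor relation possible.
```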