991 results for Non-representational methodologies


Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study was to estimate, in a population of crossbred cattle, the non-additive genetic effects for the traits weight at 205 and 390 days of age and scrotal circumference, and to evaluate the inclusion of these effects in the prediction of breeding values of sires using different estimation methodologies. In method 1, the data were pre-adjusted for the non-additive effects obtained by the least squares means method, in a model that considered the direct additive, maternal and fixed non-additive genetic effects, the direct and total maternal heterozygosities, and epistasis. In method 2, the non-additive effects were included as covariates in the genetic model. Breeding values for adjusted and non-adjusted data were predicted considering direct additive and maternal effects and, for weight at 205 days, also the permanent environmental effect, as random effects in the model. The breeding values of the sire categories for weight at 205 days were compared in order to verify changes in the magnitude of the predictions and in the ranking of animals between the two methods of correcting the data for non-additive effects. The non-additive effects were not similar in magnitude and direction between the two estimation methods, nor across the traits evaluated. Pearson and Spearman correlations between breeding values were higher than 0.94, indicating that the choice of method does not change the selection of animals.
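As a rough illustration of the final comparison step, the agreement between the two sets of predicted breeding values can be checked with a Pearson correlation (agreement in magnitude) and a Spearman correlation (agreement in ranking); the values below are simulated stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)

# Hypothetical breeding values for 50 sires predicted by the two methods:
# method 1 (data pre-adjusted for non-additive effects) and
# method 2 (non-additive effects fitted as covariates).
ebv_method1 = rng.normal(0.0, 5.0, size=50)
ebv_method2 = ebv_method1 + rng.normal(0.0, 1.0, size=50)  # closely related

r_pearson, _ = pearsonr(ebv_method1, ebv_method2)    # agreement in magnitude
r_spearman, _ = spearmanr(ebv_method1, ebv_method2)  # agreement in ranking

# Correlations above ~0.94, as reported in the study, indicate that
# re-ranking of sires between the two methods is negligible.
print(r_pearson, r_spearman)
```
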

Relevance:

30.00%

Publisher:

Abstract:

The ongoing innovation in the microwave transistor technologies used to implement microwave circuits must be supported by the study and development of suitable design methodologies which, depending on the application, fully exploit the potential of the technology. After the choice of the technology for a particular application, the circuit designer has few degrees of freedom in carrying out the design; in most cases, due to technological constraints, foundries develop and provide customized processes optimized for a specific performance such as power, low noise, linearity or broadband operation. For these reasons circuit design is always a compromise, a search for the best trade-off among the desired performances. This approach becomes crucial in the design of microwave systems for satellite applications: the tight space constraints require the best performance to be reached under electrically and thermally de-rated conditions, relative to the maximum ratings of the chosen technology, in order to ensure adequate reliability. In particular, this work concerns one of the most critical components in the front end of a satellite antenna, the High Power Amplifier (HPA). The HPA is the main source of power dissipation and therefore the element with the greatest impact on the size, weight and cost of the telecommunication apparatus; it is clear that design strategies addressing the optimization of power density, efficiency and reliability are of major concern. Many transactions and publications describe different methods for the design of power amplifiers, showing that very good levels of output power, efficiency and gain can be obtained. Starting from existing knowledge, the target of the research activities summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account on the same footing as power and efficiency. After a review of existing power amplifier design theory, the first section of this work describes the effectiveness of a methodology based on the accurate control and shaping of the dynamic load line, explaining all the steps in the design of two different kinds of high power amplifier. Considering the trade-off between the main performances and reliability as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the load line at the intrinsic terminals of the selected active device. The methodology proposed in this first part assumes that the designer has an accurate electrical model of the device available; the variety of publications on this topic shows how difficult it is to build a CAD model capable of taking into account all the non-ideal phenomena that occur when the amplifier operates at such high frequency and power levels. For this reason, and especially for the emerging Gallium Nitride (GaN) technology, the second section describes a new approach to power amplifier design based on the experimental characterization of the intrinsic load line by means of a low-frequency, high-power measurement bench. Thanks to the possibility of carrying out my Ph.D. in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programmes commissioned by space agencies, with the aim of supporting technology transfer from universities to industry and of promoting science-based entrepreneurship. For these reasons the proposed design methodology is illustrated through many experimental results.
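As a minimal sketch of the load-line reasoning discussed above (the classical class-A estimate, not the thesis methodology itself), the optimum intrinsic load resistance and maximum linear output power follow from the device voltage and current limits; all device figures below are illustrative assumptions, not data from this work.

```python
# Classical load-line estimate for a class-A power stage.
# Device figures below are illustrative assumptions, not thesis data.
V_DD   = 28.0   # drain supply voltage [V] (plausible for a GaN HEMT)
V_knee = 4.0    # knee voltage [V]
I_max  = 2.0    # maximum drain current [A]

# Optimum load resistance seen at the intrinsic drain terminals
R_opt = 2.0 * (V_DD - V_knee) / I_max

# Maximum linear output power delivered to that load
P_out = (V_DD - V_knee) * I_max / 4.0

# Space applications impose electrical de-rating with respect to the
# maximum ratings; e.g. an 80% voltage/current-swing de-rating shrinks
# the load-line excursion and hence the available power quadratically:
derate = 0.8
P_derated = derate**2 * P_out

print(R_opt, P_out, P_derated)  # 24.0 ohm, 12.0 W, 7.68 W
```
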

Relevance:

30.00%

Publisher:

Abstract:

The thesis studies the economic and financial conditions of Italian households, using microeconomic data from the Survey on Household Income and Wealth (SHIW) over the period 1998-2006. It develops along two lines of enquiry. First, it studies the determinants of households' holdings of assets and liabilities and estimates their degree of correlation. After a review of the literature, it estimates two non-linear multivariate models of the interactions between assets and liabilities on repeated cross-sections. Second, it analyses households' financial difficulties. It defines a quantitative measure of financial distress and tests, by means of non-linear dynamic probit models, whether the probability of experiencing financial difficulties is persistent over time. Chapter 1 provides a critical review of the theoretical and empirical literature on the estimation of asset and liability holdings, on their interactions and on households' net wealth. The review stresses that a large part of the literature explains households' debt holdings as a function, among other factors, of net wealth, an assumption that runs into possible endogeneity problems. Chapter 2 defines two non-linear multivariate models to study the interactions between assets and liabilities held by Italian households. Estimation refers to a pooling of SHIW cross-sections. The first model is a bivariate tobit that estimates the factors affecting assets and liabilities and their degree of correlation, with results consistent with theoretical expectations. To tackle non-normality and heteroskedasticity in the error term, which make tobit estimators inconsistent, semi-parametric estimates are provided that confirm the results of the tobit model. The second model is a quadrivariate probit on three different assets (safe, risky and real) and total liabilities; the results show the patterns of interdependence suggested by theoretical considerations. Chapter 3 reviews the methodologies for estimating non-linear dynamic panel data models, drawing attention to the problems that must be dealt with to obtain consistent estimators. Specific attention is given to the initial conditions problem raised by the inclusion of the lagged dependent variable in the set of explanatory variables. The advantage of dynamic panel data models lies in the fact that they can simultaneously account for true state dependence, via the lagged variable, and for unobserved heterogeneity, via the specification of individual effects. Chapter 4 applies the models reviewed in Chapter 3 to analyse the financial difficulties of Italian households, using information on net wealth as provided in the panel component of the SHIW. The aim is to test whether households persistently experience financial difficulties over time. A thorough discussion is provided of the alternative approaches proposed in the literature (subjective/qualitative indicators versus quantitative indexes) to identify households in financial distress. Households in financial difficulties are identified as those holding amounts of net wealth below the first quartile of the net wealth distribution. Estimation is conducted via four different methods: the pooled probit model, the random-effects probit model with exogenous initial conditions, the Heckman model and the recently developed Wooldridge model. Results from all estimators support the hypothesis of true state dependence and show that, in line with the literature, the less sophisticated models, namely the pooled and exogenous-initial-conditions models, over-estimate such persistence.

Relevance:

30.00%

Publisher:

Abstract:

This Ph.D. work focused mainly on catalysis as a key technology for achieving the objectives of sustainable (green) chemistry. After introducing the concepts of sustainable (green) chemistry and assessing new sustainable chemical technologies, the relationship between catalysis and sustainable (green) chemistry is briefly discussed and illustrated through an analysis of selected, relevant examples. Afterwards, as a continuation of the ongoing interest in Dr. Marco Bandini's group in organometallic and organocatalytic processes, I addressed my efforts to the design and development of novel catalytic green methodologies for the synthesis of enantiomerically enriched molecules. In the first two projects, attention was focused on the use of solid supports to carry out reactions that still remain a prerogative of homogeneous catalysis. First, particular emphasis was placed on the discovery of catalytic enantioselective variants of the nitroaldol condensation (commonly termed the Henry reaction), using a complex consisting of polyethylene-supported diamino thiophene (DATx) ligands and copper as the active species. In the second project, a new class of surfaces electrochemically modified with DATx palladium complexes is presented. The DATx-graphite system proved to be efficient in promoting the Suzuki reaction. Moreover, in collaboration with Prof. Wolf at the University of British Columbia (Vancouver), cyclic voltammetry studies were carried out. This study disclosed new opportunities for carbon-carbon bond-forming processes using heterogeneous, electrodeposited catalyst films. Straightforward metal-free catalysis then allowed an exploration of the world of organocatalysis: three novel methodologies, using Cinchona, guanidine and phosphine derivatives, were developed in the three following projects. An interesting variant of the nitroaldol condensation with simple trifluoromethyl ketones, together with their application in a non-conventional activation of indolyl cores by Friedel-Crafts functionalization, led to two novel synthetic protocols. These approaches allowed the preparation of synthetically useful trifluoromethyl derivatives bearing quaternary stereocenters. In the sixth project, the first γ-alkylation of allenoates with conjugated carbonyl compounds was developed. The last part of this Ph.D. thesis is based on a collaboration with Prof. Balzani and Prof. Gigli, in which I was involved in the synthesis and characterization of a new type of heteroleptic cyclometalated Ir(III) complexes bearing bis-oxazolines (BOXs) as ancillary ligands. The new heteroleptic complexes were fully characterized and, in order to examine the electroluminescent properties of FIrBOX(CH2), an organic light-emitting device was realized.

Relevance:

30.00%

Publisher:

Abstract:

Monitoring foetal health is a very important task in clinical practice for appropriately planning pregnancy management and delivery. In the third trimester of pregnancy, ultrasound cardiotocography is the most widely employed diagnostic technique: foetal heart rate and uterine contraction signals are simultaneously recorded and analysed in order to ascertain foetal health. Because the interpretation of ultrasound cardiotocography still lacks complete reliability, new parameters and methods of interpretation, or alternative methodologies, are necessary to further support physicians' decisions. To this aim, foetal phonocardiography and electrocardiography are considered in this thesis as alternative techniques. Furthermore, the variability of the foetal heart rate is thoroughly studied. Frequency components and their modifications can be analysed with a time-frequency approach, giving a distinct understanding of the spectral components and of their change over time related to foetal reactions to internal and external stimuli (such as uterine contractions). Such modifications of the power spectrum can be a sign of autonomic nervous system reactions and therefore represent additional, objective information about foetal reactivity and health. However, some limits of ultrasonic cardiotocography remain, for example in long-term foetal surveillance, which is often advisable mainly in high-risk pregnancies. In these cases the fully non-invasive acoustic recording through the maternal abdomen, foetal phonocardiography, represents a valuable alternative to ultrasonic cardiotocography. Unfortunately, the recorded foetal heart sound signal is heavily corrupted by noise, so the determination of the foetal heart rate raises serious signal processing issues. A new algorithm for foetal heart rate estimation from foetal phonocardiographic recordings is presented in this thesis. Different filtering and enhancement techniques were applied to enhance the first foetal heart sounds, and several signal processing strategies were implemented, evaluated and compared in order to identify the one with the best average results. In particular, phonocardiographic signals were recorded simultaneously with ultrasonic cardiotocographic signals in order to compare the two foetal heart rate series (the one estimated by the developed algorithm and the one provided by the cardiotocographic device). The algorithm's performance was tested on phonocardiographic signals recorded from pregnant women, yielding reliable foetal heart rate signals very close to the ultrasound cardiotocographic recordings, which were taken as the reference. The algorithm was also tested using a foetal phonocardiographic recording simulator developed and presented in this thesis. The aim was to provide software for simulating recordings corresponding to different foetal conditions and recording situations, and to use it as a test tool for comparing and assessing different foetal heart rate extraction algorithms. Since there are few studies on the time characteristics and frequency content of foetal heart sounds, and the available literature in this area is sparse and not rigorous, a pilot data collection study was also conducted with the purpose of specifically characterising both foetal and maternal heart sounds. Finally, the combined use of foetal phonocardiographic and electrocardiographic methodologies is presented in order to detect the foetal heart rate and other functional anomalies. The developed methodologies, suitable for longer-term assessment, were able to detect heartbeat events, such as first and second heart sounds and QRS waves, correctly. The detection of these events provides reliable measures of the foetal heart rate, and potentially information about the systolic time intervals and the foetal circulatory impedance.
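The filter-enhance-detect pipeline described above can be sketched in a few lines: band-pass filtering around the S1 energy band, envelope extraction, and peak picking to recover the heart rate. This is a generic illustration on a synthetic signal, not the thesis algorithm; all signal parameters (sampling rate, S1 band, burst shape) are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

# Synthetic foetal phonocardiogram: S1-like bursts at 140 bpm buried in noise.
fs = 1000                      # sampling rate [Hz] (assumed)
t = np.arange(0, 10, 1 / fs)   # 10 s recording
fhr_true = 140.0               # foetal heart rate [bpm]
period = 60.0 / fhr_true
pcg = np.zeros_like(t)
for beat in np.arange(0, 10, period):          # one S1-like burst per beat
    pcg += 3.0 * np.exp(-((t - beat) / 0.01) ** 2) * np.sin(2 * np.pi * 50 * t)
noisy = pcg + 0.4 * np.random.default_rng(2).normal(size=t.size)

# Band-pass around an assumed S1 energy band, then envelope + peak picking.
b, a = butter(4, [20, 80], btype="band", fs=fs)
filtered = filtfilt(b, a, noisy)
envelope = np.convolve(np.abs(filtered), np.ones(50) / 50, mode="same")
peaks, _ = find_peaks(envelope, distance=int(0.25 * fs),
                      height=envelope.mean() + envelope.std())

fhr_est = 60.0 / np.median(np.diff(peaks) / fs)   # beats per minute
print(round(fhr_est, 1))
```

The median over beat-to-beat intervals makes the estimate robust to the occasional missed or spurious peak, which is the practical difficulty with noisy abdominal recordings.
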

Relevance:

30.00%

Publisher:

Abstract:

It has been demonstrated that iodine has an important influence on atmospheric chemistry, especially on the formation of new particles and on the enrichment of iodine in marine aerosols. The most probable chemical species involved in the production or growth of these particles are iodine oxides, produced photochemically from biogenic halocarbon emissions and/or iodine emission from the sea surface. However, the iodine chemistry from the gaseous to the particulate phase in the coastal atmosphere and the chemical nature of the condensing iodine species are still not understood. A Tenax/Carbotrap adsorption sampling technique and a thermo-desorption/cryo-trap/GC-MS system have been further developed and improved for measuring volatile organic iodine species in the gas phase. Several iodo-hydrocarbons such as CH3I, C2H5I, CH2ICl, CH2IBr and CH2I2 have been measured in samples from a calibration test gas source (standards), in real air samples and in samples from seaweed/macro-algae emission experiments. A denuder sampling technique has been developed to characterise potential precursor compounds of coastal particle formation processes, such as molecular iodine in the gas phase. Starch-, TMAH- (tetramethylammonium hydroxide) and TBAH- (tetrabutylammonium hydroxide) coated denuders were tested for their efficiency in collecting I2 at the inner surface, followed by TMAH extraction and ICP/MS determination with tellurium added as an internal standard. The developed method proved to be an effective, accurate and suitable procedure for I2 measurement in the field, with an estimated detection limit of ~0.10 ng·L-1 for a sampling volume of 15 L. An H2O/TMAH-extraction-ICP/MS method has been developed for the accurate and sensitive determination of iodine species in tropospheric aerosol particles. The particle samples were collected on cellulose-nitrate filters using conventional filter holders, or on cellulose-nitrate/Tedlar foils using a 5-stage Berner impactor for size-segregated particle analysis. The water-soluble species IO3- and I- were separated by an anion-exchange process after water extraction. Non-water-soluble species, including iodine oxides and organic iodine, were digested and extracted with TMAH. The three sample fractions were then analysed by ICP/MS. The detection limit for particulate iodine was determined to be 0.10-0.20 ng·m-3 for sampling volumes of 40-100 m3. The developed methods were used in two field measurement campaigns, in May 2002 and September 2003, at and around the Mace Head Atmospheric Research Station (MHARS) on the west coast of Ireland. Elemental iodine, as a precursor of iodine chemistry in the coastal atmosphere, was determined in the gas phase at a seaweed hot-spot around the MHARS; I2 concentrations were in the range 0-1.6 ng·L-1 and showed a positive correlation with the ozone concentration. A seaweed-chamber experiment performed at the field measurement station showed that the I2 emission rate from macro-algae was in the range 0.019-0.022 ng·min-1·kg-1. During these experiments, nanometre-particle concentrations were obtained from Scanning Mobility Particle Sizer (SMPS) measurements. Particle number concentrations were found to correlate linearly with elemental iodine in the gas phase of the seaweed chamber, showing that gaseous I2 is one of the important precursors of new particle formation in the coastal atmosphere. Iodine contents in the particle phase were measured in both field campaigns at and around the field measurement station. Total iodine concentrations were found to be in the range 1.0-21.0 ng·m-3 in the PM2.5 samples. A significant correlation between the total iodine concentrations and the nanometre-particle number concentrations was observed. The particulate iodine speciation indicated that iodide contents are usually higher than those of iodate in all samples, with ratios in the range 2-5:1. It is possible that these water-soluble iodine species are transferred through the sea-air interface into the particle phase. The ratio of water-soluble (iodate + iodide) to non-water-soluble species (probably iodine oxides and organic iodine compounds) was observed to be in the range 1:1 to 1:2. It appears that higher concentrations of non-water-soluble species, as products of photolysis transferred from the gas phase into the particle phase, are obtained in samples collected while nucleation events occur. This supports the idea that iodine chemistry in the coastal boundary layer is linked with new particle formation events. Furthermore, artificial aerosol particles were formed from gaseous iodine sources (e.g. CH2I2) in a laboratory reaction-chamber experiment, in which the rate constant of CH2I2 photolysis was calculated assuming first-order reaction kinetics. The end products of iodine chemistry in the particle phase were identified and quantified.
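The first-order kinetics used for the chamber photolysis can be made concrete with a short numerical sketch: for d[C]/dt = -J[C], a linear fit of ln(C/C0) against time recovers the rate constant. The rate constant and sampling times below are illustrative assumptions, not values from this work.

```python
import numpy as np

# First-order photolysis kinetics, as assumed for CH2I2 in the chamber
# experiments: d[C]/dt = -J*[C]  =>  ln([C]/[C]0) = -J*t.
J_true = 5.0e-3                      # photolysis rate constant [1/s] (assumed)
t = np.linspace(0, 600, 13)          # sampling times over 10 min [s]
C0 = 1.0
C = C0 * np.exp(-J_true * t)         # concentration decay

# Retrieve J from a linear fit of ln(C/C0) against t:
slope, _ = np.polyfit(t, np.log(C / C0), 1)
J_fit = -slope
print(J_fit)  # recovers 5.0e-3 s^-1
```
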

Relevance:

30.00%

Publisher:

Abstract:

The research project of this thesis focused on the synthesis of three classes of molecules: β-lactams, profens and α-aminonitriles, using modern organic synthesis techniques, environmentally sustainable methodologies and biocatalytic strategies. Profens are a widely used class of anti-inflammatory drugs, and in particular we developed and optimized a two-step procedure to obtain (S)-profens from racemic 2-arylpropanals. The first step is a bioreduction of the aldehydes to the corresponding (S)-2-arylpropanols through a dynamic kinetic resolution (DKR) process mediated by the enzyme horse liver alcohol dehydrogenase (HLADH). The second, the oxidation to (S)-profens, is promoted by NaClO2 with TEMPO as catalyst. In order to improve the process, in collaboration with Francesca Paradisi's research group at University College Dublin, we immobilized the HLADH enzyme, obtaining good yields and better enantioselectivity. We also proposed an interesting enzymatic approach to the oxidation of (S)-2-arylpropanols using a laccase from Trametes versicolor. The β-lactam ring is a very important heterocycle, known to be an interesting pharmacophore. We synthesized new N-methylthio β-lactams, which showed very interesting antibacterial activity against resistant strains of Staphylococcus aureus isolated from patients with cystic fibrosis. We then conjugated polyphenolic groups to these new β-lactams, obtaining molecules with dual antioxidant and antibacterial activity. We also synthesized a new retinoid-β-lactam hybrid that induced differentiation in neuroblastoma cells. We then exploited the ring opening of the monobactam ring by hydrolytic enzymes in order to obtain desymmetrized chiral β-amino acids such as the monoester of β-aminoglutamic acid. As for the α-aminonitriles, a Strecker protocol was developed. The reactions were very efficient using acetone cyanohydrin as the cyanide source in water, with different aldehydes and ketones and with primary and secondary amines. To develop an asymmetric version of the protocol, we used chiral amines with the aim of obtaining new chiral α-aminonitriles.

Relevance:

30.00%

Publisher:

Abstract:

This study focuses on the use of metabonomic applications to measure fish freshness in various biological species and to evaluate their storage conditions. The metabonomic approach is innovative and is based on molecular profiling through nuclear magnetic resonance (NMR). On the one hand, the aim is to ascertain whether a fish has maintained, within certain limits, its sensory and nutritional characteristics after being caught; on the other, the research observes the alterations in the product's composition. The spectroscopic data obtained through experimental 1H-NMR molecular profiling of the fish extracts are compared with those obtained on the same samples through conventional analytical methods currently in practice. The latter methods derive chemical indices of freshness from the biochemical and microbial degradation of protein and non-protein nitrogen compounds (trimethylamine, N(CH3)3, nucleotides, amino acids, etc.). Subsequently, a principal component analysis (PCA) and a partial least squares discriminant analysis (PLS-DA) are performed within the metabonomic approach to condense the temporal evolution of freshness into a single parameter. In particular, under both storage conditions (4 °C and 0 °C) the first principal component (PC1) captures, with a very high explained variance, the evolution of the molecular composition of the samples (as described by the 1H-NMR spectrum) during storage. The results of this study give scientific support to an objective evaluation of the freshness of fish products, showing which products can be labeled "fresh fish."
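The idea of condensing the temporal evolution of freshness into PC1 can be sketched on toy data: if a few spectral bins grow with storage time, the first principal component of the spectra tracks storage day. The data below are simulated stand-ins for 1H-NMR profiles, with all dimensions and signal strengths assumed.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Toy stand-in for 1H-NMR profiles of fish extracts during storage:
# 20 samples x 100 spectral bins, where a few "degradation" bins
# (e.g. a trimethylamine-like region) grow linearly with storage day.
days = np.repeat(np.arange(10), 2).astype(float)     # 2 samples per day
spectra = rng.normal(0.0, 0.1, size=(20, 100))
spectra[:, 10:15] += days[:, None] * 0.5             # freshness-related signal

# PC1 then condenses the temporal evolution of freshness into one parameter.
pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)
corr = np.corrcoef(scores[:, 0], days)[0, 1]
print(abs(corr))  # PC1 scores track storage time closely
```
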

Relevance:

30.00%

Publisher:

Abstract:

Nowadays the rise of non-recurring engineering (NRE) costs associated with complexity is becoming a major factor in SoC design, limiting both scaling opportunities and the flexibility advantages offered by the integration of complex computational units. The introduction of embedded programmable elements can represent an appealing solution, able both to guarantee the desired flexibility and upgradability and to widen the SoC market. In particular, embedded FPGA (eFPGA) cores can provide bit-level optimization for those applications which benefit from synthesis, paying on the other hand in terms of performance penalties and area overhead with respect to standard-cell ASIC implementations. In this scenario, this thesis proposes a design methodology for a synthesizable programmable device designed to be embedded in a SoC. A soft-core embedded FPGA (eFPGA) is presented and analyzed in terms of the opportunities offered by a fully synthesizable approach, following an implementation flow based on a standard-cell methodology. A key point of the proposed eFPGA template is that it adopts a Multi-Stage Switching Network (MSSN) as the foundation of the programmable interconnect, since such a network can be efficiently synthesized and optimized through a standard-cell implementation flow while ensuring an intrinsically congestion-free network topology. The flexibility potential of the eFPGA has been evaluated using different technology libraries (STMicroelectronics CMOS 65nm and BCD9s 0.11μm) through a design space exploration in terms of area-speed-leakage trade-offs, enabled by the full synthesizability of the template. Since the most relevant disadvantage of the adopted soft approach, compared to a hard core, is the performance overhead, the eFPGA analysis targeted small area budgets. The generation of the configuration bitstream was achieved by implementing a custom CAD flow environment, which allowed functional verification and performance evaluation through an application-aware analysis.
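The appeal of a multi-stage switching network over a flat crossbar can be illustrated with classical Clos-network sizing: a three-stage network needs far fewer crosspoints than an N x N crossbar, and the Slepian-Duguid condition gives a congestion-free (rearrangeably non-blocking) guarantee. This is a generic textbook sketch, not the MSSN topology of the thesis; all sizes are illustrative.

```python
# Sizing sketch for a 3-stage Clos-style switching network, a classical
# example of the multi-stage interconnect family used for the eFPGA.
def clos_crosspoints(n: int, m: int, r: int) -> int:
    """Crosspoint count of a 3-stage Clos network C(n, m, r):
    r ingress switches of size n x m, m middle switches of size r x r,
    r egress switches of size m x n, for N = n*r total terminals."""
    return r * n * m + m * r * r + r * m * n

def is_rearrangeably_nonblocking(n: int, m: int) -> bool:
    # Slepian-Duguid condition: m >= n guarantees that any permutation of
    # terminals can be routed without internal congestion.
    return m >= n

N = 64                                   # terminals to interconnect (assumed)
n = 8; r = N // n; m = n                 # minimal rearrangeable choice
assert is_rearrangeably_nonblocking(n, m)
print(clos_crosspoints(n, m, r))         # 1536 vs. N*N = 4096 for a crossbar
```
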

Relevance:

30.00%

Publisher:

Abstract:

A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
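The ordered-subsets update at the core of the reconstruction can be sketched in a few lines of numpy: each sub-iteration back-projects the ratio of measured to forward-projected counts and applies a multiplicative update. The thesis uses a precalculated Monte Carlo system matrix with exploited symmetries; here a small random matrix stands in for it, so this is an illustration of the OSEM update only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal OSEM sketch for emission tomography on a toy problem:
n_bins, n_vox = 200, 50
A = rng.uniform(0.0, 1.0, size=(n_bins, n_vox))     # stand-in system matrix
x_true = rng.uniform(0.5, 2.0, size=n_vox)          # true activity
y = rng.poisson(A @ x_true).astype(float)           # measured counts

n_subsets = 4
subsets = np.array_split(np.arange(n_bins), n_subsets)

x = np.ones(n_vox)                                   # uniform initial image
for _ in range(10):                                  # 10 full iterations
    for s in subsets:                                # one OSEM sub-iteration
        As = A[s]
        ratio = y[s] / np.maximum(As @ x, 1e-12)     # measured / projected
        x *= (As.T @ ratio) / As.sum(axis=0)         # multiplicative update

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(err)  # relative reconstruction error
```

The multiplicative form keeps the image non-negative at every step, one of the reasons EM-type updates are preferred for Poisson count data.
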

Relevance:

30.00%

Publisher:

Abstract:

In recent decades, full electric and hybrid electric vehicles have emerged as an alternative to conventional cars due to a range of factors, including environmental and economic aspects. These vehicles are the result of considerable efforts to reduce the use of fossil fuel for vehicle propulsion. Sophisticated technologies such as hybrid and electric powertrains require careful study and optimization, and mathematical models play a key role at this point. Currently, many advanced mathematical analysis tools, as well as computer applications, have been built for vehicle simulation purposes. Given the great interest in hybrid and electric powertrains, along with the increasing importance of reliable computer-based models, the author decided to integrate both aspects in the research purpose of this work. Furthermore, this is one of the first final degree projects carried out at the ETSII (Higher Technical School of Industrial Engineers) covering the study of hybrid and electric propulsion systems. The present project is based on MBS3D 2.0, a specialized software package for the dynamic simulation of multibody systems developed at the UPM Institute of Automobile Research (INSIA). Automobiles are a clear example of complex multibody systems, which are present in nearly every field of engineering. The work presented here benefits from the availability of the MBS3D software, a program that has proven to be a very efficient tool with a highly developed underlying mathematical formulation. On this basis, the focus of this project is the extension of MBS3D so that it can perform dynamic simulations of hybrid and electric vehicle models. This requires the joint simulation of the mechanical model of the vehicle together with the model of the hybrid or electric powertrain. These sub-models belong to completely different physical domains.
In fact, the powertrain consists of energy storage systems, electrical machines and power electronics connected to purely mechanical components (wheels, suspension, transmission, clutch…). The challenge today is to create a global vehicle model that is valid for computer simulation. Therefore, the main goal of this project is to apply co-simulation methodologies to a comprehensive model of an electric vehicle, in which sub-models from different areas of engineering are coupled. The electric vehicle (EV) model created consists of a separately excited DC electric motor, a Li-ion battery pack, a DC/DC chopper converter and a multibody vehicle model. Co-simulation techniques allow car designers to simulate complex vehicle architectures and behaviors, which are usually difficult to implement in a real environment due to safety and/or economic reasons. In addition, multi-domain computational models help to detect the effects of different driving patterns and parameters and to improve the models in a fast and effective way, so automotive designers can greatly benefit from a multidisciplinary approach to new hybrid and electric vehicles. In this case, the global electric vehicle model includes an electrical subsystem and a mechanical subsystem. The electrical subsystem consists of three basic components: electric motor, battery pack and power converter. A modular representation is used for building the dynamic model of the vehicle drivetrain: every component of the drivetrain (submodule) is modeled separately and has its own general dynamic model, with clearly defined inputs and outputs. All the submodules are then assembled according to the drivetrain configuration and, in this way, the power flow across the components is completely determined. Dynamic models of electrical components are often based on equivalent circuits, where Kirchhoff's voltage and current laws are applied to derive the algebraic and differential equations.
Here, a Randles circuit is used for dynamic modeling of the battery, and the electric motor is modeled through the analysis of the equivalent circuit of a separately excited DC motor, in which the power converter is included. The mechanical subsystem is defined by the MBS3D equations, which consider the position, velocity and acceleration of all the bodies comprising the vehicle multibody system. MBS3D 2.0 is entirely written in MATLAB, and the structure of the program has been thoroughly studied and understood by the author. The MBS3D software is adapted according to the requirements of the applied co-simulation method. Some of the core functions are modified, such as the integrator and graphics, and several auxiliary functions are added in order to compute the mathematical model of the electrical components. By coupling and co-simulating both subsystems, it is possible to evaluate the dynamic interaction among all the components of the drivetrain. A 'tight-coupling' method is used to co-simulate the sub-models. This approach integrates all subsystems simultaneously, and the results of the integration are exchanged by function call. This means that the integration is done jointly for the mechanical and the electrical subsystem under a single integrator, so the speed of integration is determined by the slower subsystem. Simulations are then used to show the performance of the developed EV model. However, this project focuses more on the validation of the computational and mathematical tool for electric and hybrid vehicle simulation. For this purpose, a detailed study and comparison of different integrators within the MATLAB environment is carried out. Consequently, the main efforts are directed towards the implementation of co-simulation techniques in the MBS3D software. In this regard, the aim is not to create an extremely precise EV model in terms of real vehicle performance, although an acceptable level of accuracy is achieved.
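The project itself is implemented in MATLAB and its parameters are not given in the abstract; purely as an illustration, a coupled model of the kind described (a Randles battery feeding a separately excited DC motor through an average-value chopper, integrated jointly with the mechanical rotor state under a single solver, in the spirit of the tight-coupling approach) could be sketched in Python. All parameter values and the chopper averaging below are hypothetical assumptions, not the project's:

```python
from scipy.integrate import solve_ivp

# Hypothetical parameters (illustrative only, not the project's values)
R0, R1, C1, V_OC = 0.05, 0.02, 2000.0, 360.0  # Randles battery: series R, RC pair, open-circuit voltage
RA, LA, K_PHI = 0.1, 1e-3, 1.2                # DC motor: armature resistance, inductance, k*phi
J, B = 5.0, 0.05                              # rotor inertia and viscous friction

def ev_rhs(t, x, duty):
    """Coupled electrical/mechanical ODEs under a single integrator
    (the tight-coupling idea, in miniature)."""
    i_a, v_rc, omega = x               # armature current, RC overvoltage, shaft speed
    i_bat = duty * i_a                 # average battery current of a buck-type chopper
    v_bat = V_OC - v_rc - R0 * i_bat   # battery terminal voltage (Randles circuit)
    v_mot = duty * v_bat               # average chopper output voltage
    di_a = (v_mot - RA * i_a - K_PHI * omega) / LA  # armature KVL with back-EMF
    dv_rc = (i_bat - v_rc / R1) / C1                # Randles RC branch
    domega = (K_PHI * i_a - B * omega) / J          # motor torque vs. friction
    return [di_a, dv_rc, domega]

# Integrate 5 s at a constant 50% duty cycle with an implicit (stiff) solver
sol = solve_ivp(ev_rhs, (0.0, 5.0), [0.0, 0.0, 0.0], args=(0.5,), method="BDF")
```

Note the time-scale separation: the electrical time constant (LA/RA = 10 ms) is far shorter than the mechanical one, which is exactly the fast/slow structure that, in the full model, makes the joint electrical-mechanical ODE system stiff.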
The gap between the EV model and the real system is filled, in a way, by introducing the gas and brake pedal inputs, which reflect the actual driver behavior. This input is included directly in the differential equations of the model, and determines the amount of current provided to the electric motor. For a separately excited DC motor, the traction torque delivered to the car wheels is proportional to the rotor current. Therefore, as in real vehicles, the propulsion torque in the mathematical model is controlled through acceleration and brake pedal commands. The designed transmission system also includes a reduction gear that adapts the torque coming from the motor drive and transfers it to the wheels. The main contribution of this project is, therefore, the implementation of a new calculation path for the wheel torques, based on the performance characteristics and outputs of the electric powertrain model. Originally, the wheel traction and braking torques were input to MBS3D through a vector directly computed by the user in a MATLAB script. Now, they are calculated as a function of the motor current which, in turn, depends on the current provided by the battery pack across the DC/DC chopper converter. The motor and battery currents and voltages are the solutions of the electrical ODE (Ordinary Differential Equation) system coupled to the multibody system. Simultaneously, the outputs of the MBS3D model are the position, velocity and acceleration of the vehicle at all times. The motor shaft speed is computed from the output vehicle speed, considering the wheel radius, the gear reduction ratio and the transmission efficiency. This motor shaft speed, available from the MBS3D model, is then introduced in the differential equations corresponding to the electrical subsystem.
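The new calculation path described above reduces to a short chain of algebraic relations in each direction: motor current to wheel torque, and vehicle speed back to motor shaft speed. A minimal Python sketch follows; the torque constant, gear ratio, wheel radius and efficiency are made-up illustrative values, not the project's:

```python
K_PHI = 1.2         # motor torque constant k*phi (N·m/A), hypothetical
GEAR_RATIO = 7.0    # reduction gear ratio, hypothetical
WHEEL_RADIUS = 0.3  # wheel radius in m, hypothetical
EFFICIENCY = 0.95   # transmission efficiency, hypothetical

def wheel_torque(rotor_current):
    """Traction torque at the wheels from the motor armature current:
    motor torque is proportional to rotor current for a separately
    excited DC motor, then scaled by the reduction gear and efficiency."""
    motor_torque = K_PHI * rotor_current
    return motor_torque * GEAR_RATIO * EFFICIENCY

def motor_shaft_speed(vehicle_speed):
    """Motor shaft speed (rad/s) recovered from the vehicle speed (m/s)
    output by the multibody model, via wheel radius and gear ratio."""
    return (vehicle_speed / WHEEL_RADIUS) * GEAR_RATIO
```

In the full model these two functions close the loop: the first feeds the multibody equations, the second feeds the electrical ODEs.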
In this way, MBS3D and the electrical powertrain model are interconnected, and both subsystems exchange values as expected with the tight-coupling approach. When programming mathematical models of complex systems, code optimization is a key step in the process. A way to improve the overall performance of the integration, making use of C/C++ as an alternative programming language, is described and implemented. Although this entails a greater programming effort, it leads to important advantages regarding co-simulation speed and stability. In order to do this, it is necessary to integrate MATLAB with another integrated development environment (IDE) where C/C++ code can be generated and executed. In this project, C/C++ files are programmed in Microsoft Visual Studio, and the interface between both IDEs is created by building C/C++ MEX-file functions. These programs contain functions or subroutines that can be dynamically linked and executed from MATLAB. This process achieves reductions in simulation time of up to two orders of magnitude. The tests performed with different integrators also reveal the stiff character of the differential equations corresponding to the electrical subsystem, and allow the improvement of the co-simulation process. When varying the parameters of the integration and/or the initial conditions of the problem, the solutions of the system of equations show better dynamic response and stability, depending on the integrator used. Several integrators, with fixed and variable step-size, and for stiff and non-stiff problems, are applied to the coupled ODE system. The results are then analyzed, compared and discussed. From all the above, the project can be divided into four main parts: 1. Creation of the equation-based electric vehicle model; 2. Programming, simulation and adjustment of the electric vehicle model; 3. Application of co-simulation methodologies to MBS3D and the electric powertrain subsystem; and 4.
Code optimization and study of different integrators. Additionally, in order to place the project in context, the first chapters include an introduction to basic vehicle dynamics, the current classification of hybrid and electric vehicles, and an explanation of the technologies involved, such as brake energy regeneration, electric and non-electric propulsion systems for EVs and HEVs (hybrid electric vehicles), and their control strategies. Later, the problem of dynamic modeling of hybrid and electric vehicles is discussed. The integrated development environment and the simulation tool are also briefly described. The core chapters explain the major co-simulation methodologies and how they have been programmed and applied to the electric powertrain model together with the multibody system dynamic model. Finally, the last chapters summarize the main results and conclusions of the project and propose further research topics. In conclusion, co-simulation methodologies are applicable within the integrated development environments MATLAB and Visual Studio and the simulation tool MBS3D 2.0, where equation-based models of multidisciplinary subsystems, consisting of mechanical and electrical components, are coupled and integrated in a very efficient way.
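The stiff behavior the abstract attributes to the electrical subsystem (fast electrical dynamics coupled to slow mechanical ones) is easy to reproduce on a toy two-time-scale system. The sketch below, written in Python rather than the project's MATLAB, shows why implicit solvers (BDF and Radau, analogous in spirit to MATLAB's stiff solvers such as ode15s) take far fewer steps than an explicit Runge-Kutta method on such a problem:

```python
from scipy.integrate import solve_ivp

def rhs(t, x):
    """Toy stiff system: a 'fast' state with a ~1 ms time constant
    tracking a 'slow' state with a ~1 s time constant."""
    fast, slow = x
    return [-1000.0 * (fast - slow), -slow + 0.5]

x0 = [0.0, 1.0]
steps = {}
for method in ("RK45", "BDF", "Radau"):
    sol = solve_ivp(rhs, (0.0, 5.0), x0, method=method)
    steps[method] = sol.t.size
# The explicit RK45 solver is stability-limited by the fast mode and
# must take many more (tiny) steps than the implicit solvers, even
# after the fast transient has died out.
```

The same step-count comparison, applied to the coupled EV equations, is what reveals a problem as stiff and motivates the choice of integrator.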

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Allergic eye disease encompasses a group of hypersensitivity disorders which primarily affect the conjunctiva, and its prevalence is increasing. It is estimated to affect 8% of patients attending optometric practice, but is poorly managed and rarely involves ophthalmic assessment. Seasonal allergic conjunctivitis (SAC) is the most common form of allergic eye disease (90%), followed by perennial allergic conjunctivitis (PAC; 5%). Both are type 1 IgE-mediated hypersensitivity reactions in which mast cells play an important role in the pathophysiology. The signs and symptoms are similar, but SAC occurs periodically whereas PAC occurs year round. Despite being relatively mild conditions, their effects on quality of life can be profound and therefore they demand attention. Primary management of SAC and PAC involves avoidance strategies, depending on the responsible allergen(s), to prevent the hypersensitivity reaction. Cooled tear supplements and cold compresses may help bring relief. Pharmacological agents may become necessary, as it is not possible to completely avoid the allergen(s). A wide range of anti-allergic medications is available, such as mast cell stabilisers, antihistamines and dual-action agents. Severe cases refractory to conventional treatment require anti-inflammatories, immunomodulators or immunotherapy. Additional qualifications are required to gain access to these medications, but entry-level optometrists must offer advice and supportive therapy. Based on current evidence, the efficacy of anti-allergic medications appears equivocal, so prescribing should relate to patient preference, dosing and cost. More studies with standardised methodologies are necessary to elicit the most effective anti-allergic medications, but those with dual actions are likely to be first-line agents.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This study expands the current knowledge base on the nature, causes and fate of unused medicines in primary care. Three methodologies were used, and participants for each element were sampled from the population of Eastern Birmingham PCT. A detailed assessment was made of medicines returned to pharmacies and GP surgeries for destruction, and a postal questionnaire covering medicines use and disposal was sent to patients randomly selected from the electoral roll. The content of this questionnaire was informed by qualitative data from a group interview on the subject. By use of these three methods it was possible to triangulate the data, providing a comprehensive assessment of unused medicines. Unused medicines were found to be ubiquitous in primary care, and cardiovascular, diabetic and respiratory medicines are unused in substantial quantities, accounting for a considerable proportion of the total financial value of all unused medicines. Additionally, analgesic and psychoactive medicines were highlighted as being unused in quantities sufficient to cause concern. Anti-infective medicines also appear to be present and unused in a substantial proportion of patients' homes. Changes to prescribed therapy and non-compliance were identified as important factors leading to the generation of unused medicines. However, a wide array of other elements influences the quantities and types of medicines that are unused, including the concordance of GP consultations and medication reviews, and patient factors such as age, sex or ethnicity. Medicines were appropriately discarded by one in three patients, through return to a medical or pharmaceutical establishment. Inappropriate disposal was by placing in household refuse or through grey and black water, with the possibility of hoarding or diversion also being identified. No correlations were found between the weight of unused medicines and any clinical or financial factor.
The study has highlighted unused medicines as an issue of some concern and one that requires further study.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Guest editorial. Ali Emrouznejad is a Senior Lecturer at the Aston Business School in Birmingham, UK. His areas of research interest include performance measurement and management, efficiency and productivity analysis, as well as data mining. He has published widely in various international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and Guest Editor of several special issues of journals including the Journal of the Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Sector Management. He is on the editorial board of several international journals and a co-founder of Performance Improvement Management Software. William Ho is a Senior Lecturer at the Aston University Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in various international journals such as Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research.
Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector. This special issue focuses on holistic, applied research on performance measurement in energy sector management, and on the publication of relevant applied research that bridges the gap between industry and academia. After a rigorous refereeing process, seven papers were included in this special issue. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate the changes in relative efficiency and the total factor productivity of coal-fired electricity generation in 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors reveal that the eastern provinces were relatively and technically more efficient, whereas the western provinces had the highest growth rate in the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil-fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, the bootstrapping approach is deployed to address the uncertainty surrounding DEA point estimates, and to provide bias-corrected estimations and confidence intervals for the point estimates. The author reveals from the sample that the non-lignite-fired stations are on average more efficient than the lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare the relative performance of three DEA-based measures, which estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs.
Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors discover that refineries in the Rocky Mountain region performed the best, and that about 60 percent of the oil refineries in the sample could improve their efficiencies further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach, combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA) methods, to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first model is a traditional DEA model for analyzing cost-only efficiency. The second model includes (inverse) quality, by modelling total customer minutes lost as input data. The third model is based on the idea of using total social costs, including the firm's private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and number of consumers are treated as the outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple-criteria analysis weighting approach to evaluate energy and climate policy.
The proposed approach is akin to the analytic hierarchy process, which consists of pairwise comparisons, consistency verification, and criteria prioritization. In the approach, stakeholders and experts in the energy policy field are incorporated in the evaluation process through an interactive means with verbal, numerical, and visual representation of their preferences. A total of 14 evaluation criteria were considered and classified into four objectives: climate change mitigation, energy effectiveness, socioeconomic, and competitiveness and technology. Finally, Borge Hess applies the stochastic frontier analysis approach to analyze the impact of various business strategies, including acquisitions, holding structures, and joint ventures, on a firm's efficiency, within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds no significant changes in a firm's efficiency following an acquisition, and only weak evidence of efficiency improvements caused by the new shareholder. Besides, the author discovers that parent companies appear not to influence a subsidiary's efficiency positively. In addition, the analysis shows a negative impact of a joint venture on the technical efficiency of the pipeline company. To conclude, we are grateful to all the authors for their contributions, and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
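For readers unfamiliar with the DEA machinery used in the first five papers, the core of the standard input-oriented CCR envelopment model can be sketched in a few lines of linear programming. The Python/scipy implementation and the two-DMU toy data below are illustrative only, and are not taken from any paper in the issue:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency score for each DMU.
    X: (n_dmus, n_inputs) input matrix, Y: (n_dmus, n_outputs) output matrix.
    For each DMU o, solve:
        min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
                         sum_j lam_j * y_j >= y_o,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]              # variables: [theta, lam_1..lam_n]
        A_in = np.c_[-X[o].reshape(m, 1), X.T]   # inputs:  lam.X - theta*x_o <= 0
        A_out = np.c_[np.zeros((s, 1)), -Y.T]    # outputs: -lam.Y <= -y_o
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(None, None)] + [(0.0, None)] * n)
        scores.append(res.x[0])
    return np.array(scores)

# Two-DMU toy example: B uses twice the input of A for the same output,
# so B's efficiency is 0.5 relative to the frontier defined by A.
X = np.array([[1.0], [2.0]])
Y = np.array([[1.0], [1.0]])
scores = dea_ccr_input(X, Y)
```

The papers summarized above build on this basic model in various ways, e.g. treating emissions as undesirable outputs or chaining efficiency scores over time into a Malmquist index.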