928 results for implementation analysis
Abstract:
Background: Several models have been designed to predict survival of patients with heart failure. While widely available and used both for risk stratification and for deciding among treatment options at the individual level, they have several limitations. In particular, some clinical variables that influence prognosis may have an effect that changes over time. Statistical models that accommodate such characteristics may help in evaluating prognosis. The aim of the present study was to analyze and quantify the impact of modeling heart failure survival while allowing for time-varying effects of covariates known to be independent predictors of overall mortality in this clinical setting. Methodology: Survival data from an inception cohort of five hundred patients diagnosed with heart failure functional class III and IV between 2002 and 2004 and followed up to 2006 were analyzed using the Cox proportional hazards model, variations of the Cox model, and Aalen's additive model. Principal Findings: One hundred and eighty-eight (188) patients died during follow-up. Age, serum sodium, hemoglobin, serum creatinine, and left ventricular ejection fraction were significantly associated with mortality. Evidence of a time-varying effect was suggested for the last three. Both high hemoglobin and high LV ejection fraction were associated with a reduced risk of dying, with a stronger initial effect. High creatinine, associated with an increased risk of dying, also had a stronger initial effect. The impact of age and sodium was constant over time. Conclusions: The current study points to the importance of evaluating covariates with time-varying effects in heart failure models. The analysis performed suggests that variations of the Cox and Aalen models constitute a valuable tool for identifying these variables. Incorporating covariates with time-varying effects into heart failure prognostication models may reduce bias and increase the specificity of such models.
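Not part of the original abstract: a minimal sketch of how such a screen for time-varying covariate effects might look in Python with the `lifelines` library. The data frame, its column names (age, sodium, hemoglobin, creatinine, lvef, time, death) and all values are synthetic placeholders for the study's variables, not the cohort data.

```python
# Illustrative sketch (not the study's code): fit a Cox proportional hazards model
# and screen covariates for time-varying effects via the proportional-hazards check.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "sodium": rng.normal(138, 4, n),
    "hemoglobin": rng.normal(12, 2, n),
    "creatinine": rng.normal(1.3, 0.5, n),
    "lvef": rng.normal(35, 10, n),       # LV ejection fraction (%)
    "time": rng.exponential(24, n),      # follow-up time (months)
    "death": rng.integers(0, 2, n),      # event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
cph.print_summary()

# Schoenfeld-residual-based check of the proportional-hazards assumption;
# covariates that fail it are candidates for time-varying effect modelling.
cph.check_assumptions(df, p_value_threshold=0.05)
```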
Abstract:
Background The public health system of Brazil is structured as a network of increasing complexity, but the low resolution of emergency care at pre-hospital units and the lack of organization of patient flow overloaded the hospitals, mainly those of higher complexity. Awareness of this phenomenon led Ribeirão Preto to implement the Medical Regulation Office and the Mobile Emergency Attendance System. The objective of this study was to analyze the impact of these services on the severity profile of non-traumatic afflictions in a university hospital. Methods The study conducted a retrospective analysis of the medical records of 906 patients older than 13 years of age who entered the Emergency Care Unit of the Hospital of the University of São Paulo School of Medicine at Ribeirão Preto. All presented acute non-traumatic afflictions and were admitted to the Internal Medicine, Surgery or Neurology Departments during two study periods: May 1996 (before) and May 2001 (after the implementation of the Medical Regulation Office and Mobile Emergency Attendance System). Demographics and mortality risk levels calculated by the Acute Physiology and Chronic Health Evaluation II (APACHE II) were determined. Results From 1996 to 2001, the mean age increased from 49 ± 0.9 to 52 ± 0.9 years (P = 0.021), as did the percentage of co-morbidities, from 66.6% to 77.0% (P = 0.0001), the number of in-hospital complications, from 260 to 284 (P = 0.0001), the mean calculated APACHE II mortality risk, from 12.0 ± 0.5 to 14.8 ± 0.6 (P = 0.0008), and the mortality rate, from 6.1% to 12.2% (P = 0.002). The differences were more pronounced for patients admitted to the Internal Medicine Department. Conclusion The implementation of the Medical Regulation Office and Mobile Emergency Attendance System contributed to directing patients with higher severity scores to the Emergency Care Unit, demonstrating the potential of these services for the hierarchical structuring of pre-hospital networks and referrals.
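Not part of the original abstract: a minimal sketch of a between-period comparison of a proportion (such as in-hospital mortality) with a chi-squared test, as a hedged illustration of the kind of 1996-vs-2001 comparison described above. The contingency counts below are placeholders, not the study's data.

```python
# Illustrative sketch (not the study's analysis): compare mortality between the
# two periods with a chi-squared test on a 2x2 contingency table.
from scipy.stats import chi2_contingency

# rows: period (1996, 2001); columns: (died, survived) -- hypothetical counts
table = [[28, 425],
         [55, 398]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```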
Abstract:
The reduction of friction and wear in systems presenting metal-to-metal contacts, as in several mechanical components, represents a traditional challenge in tribology. In this context, this work presents a computational study based on Archard's linear wear law and finite element modeling (FEM), in order to analyze the unlubricated sliding wear observed in typical pin-on-disc tests. The modeling was developed using the finite element software Abaqus® with 3-D deformable geometries and elastic–plastic material behavior for the contact surfaces. Archard's wear model was implemented in a FORTRAN user subroutine (UMESHMOTION) in order to describe sliding wear. Debris and oxide formation mechanisms were taken into account by using a global wear coefficient obtained from experimental measurements. The implementation performs an incremental computation of surface wear based on the nodal displacements, by means of adaptive mesh tools that rearrange local nodal positions. In this way, the worn track was obtained and the new surface profile was integrated for mass loss assessment. This work also presents experimental pin-on-disc tests with AISI 4140 pins on rotating AISI H13 discs under normal loads of 10, 35, 70 and 140 N, which represent, respectively, mild, transition and severe wear regimes, at a sliding speed of 0.1 m/s. Numerical and experimental results were compared in terms of wear rate and friction coefficient. Furthermore, in the numerical simulation the stress field distribution and changes in the surface profile across the worn track of the disc were analyzed. The applied numerical formulation proved more appropriate for predicting the mild wear regime than the severe regime, especially because of the shorter running-in period observed at lower loads, which characterizes this regime.
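Not part of the original abstract: a minimal Python sketch of the incremental Archard wear update that the UMESHMOTION approach described above performs node by node (dh = k · p · ds). The pressures, sliding increments and wear coefficient below are hypothetical, not the thesis data, and no FEM coupling is shown.

```python
# Illustrative sketch (not the UMESHMOTION subroutine): incremental wear depth
# from Archard's law applied at each contact node over many sliding increments.
import numpy as np

def archard_wear_increment(pressure, slide_increment, k_wear):
    """Wear-depth increment per node.

    pressure        : nodal contact pressure [Pa]
    slide_increment : sliding distance in this step [m]
    k_wear          : dimensional wear coefficient [1/Pa] (k/H in Archard's law)
    """
    return k_wear * pressure * slide_increment

# Example: 5 contact nodes, 1000 sliding increments of 0.1 mm each (placeholders)
pressure = np.array([80e6, 95e6, 110e6, 95e6, 80e6])   # Pa
h = np.zeros_like(pressure)                            # accumulated wear depth [m]
for _ in range(1000):
    h += archard_wear_increment(pressure, 1e-4, k_wear=1e-13)

print(h)  # worn depth per node; in the FEM the mesh nodes move inward by this amount
```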
Abstract:
INTRODUCTION: With the aim of searching for genetic factors associated with the response to an immune treatment based on autologous monocyte-derived dendritic cells pulsed with autologous inactivated HIV, we performed exome analysis, screening more than 240,000 putative functional exonic variants in 18 HIV-positive Brazilian patients who underwent the immune treatment. METHODS: Exome analysis was performed using the ILLUMINA Infinium HumanExome BeadChip. The zCall algorithm allowed us to recall rare variants. Quality control and SNP-centred analysis were done with the GenABEL R package. An in-house implementation of the Wang method permitted gene-centred analysis. RESULTS: The CCR4-NOT transcription complex, subunit 1 (CNOT1) gene (16q21) showed the strongest association with the modification of the response to the therapeutic vaccine (p=0.00075). The CNOT1 SNP rs7188697 A/G was significantly associated with DC treatment response. The presence of a G allele indicated poor response to the therapeutic vaccine (p=0.0031; OR=33.00; CI=1.74-624.66), and the SNP behaved in a dominant model (A/A vs. A/G+G/G: p=0.0009; OR=107.66; 95% CI=3.85-3013.31), with the A/G genotype present only in weak/transient responders, conferring susceptibility to poor response to the immune treatment. DISCUSSION: CNOT1 is known to be involved in the control of mRNA deadenylation and mRNA decay. Moreover, CNOT1 has recently been described as being involved in the regulation of inflammatory processes mediated by tristetraprolin (TTP). The TTP-CCR4-NOT complex (CNOT1 is the binding site for TTP within the CCR4-NOT complex) has been reported to interfere with HIV replication through post-transcriptional control. Therefore, we can hypothesize that genetic variation occurring in the CNOT1 gene could impair the TTP-CCR4-NOT complex, thus interfering with HIV replication and/or the host immune response. CONCLUSIONS: Being aware that our findings are exclusive to the 18 patients studied and need replication, and that the genetic variant of the CNOT1 gene, localized in intron 3, has no known functional effect, we propose a novel potential candidate locus for the modulation of the response to the immune treatment, and open a discussion on the need to consider the host genome as another variable to be evaluated when designing an immune therapy study.
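Not part of the original abstract: a minimal sketch of a dominant-model association test for a single SNP, of the kind reported above (G-allele carriers vs. A/A homozygotes by responder status), using Fisher's exact test in Python. The genotype counts below are hypothetical placeholders, not the study's data, and this does not reproduce the GenABEL/Wang-method pipeline.

```python
# Illustrative sketch (not the study's pipeline): dominant-model 2x2 association test.
from scipy.stats import fisher_exact

#                       weak/transient responder   good responder
# A/G or G/G carriers            a                        b
# A/A homozygotes                c                        d
table = [[7, 1],
         [1, 9]]   # hypothetical counts

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```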
Abstract:
Doctoral program: Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería, Instituto Universitario (SIANI)
Abstract:
The continuous increase in genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with the issue of assigning biological information to each sequence. Annotation is performed at each level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished by in vitro analysis procedures alone, which are extremely expensive and time-consuming when applied at such a large scale. Thus, in silico methods need to be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow a fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine-learning-based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is that it is independent of biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other available predictors developed so far. This result was achieved through a modification that I made to the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was shown to outperform all the available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine-learning-based method was implemented for the prediction of GPI-anchored proteins. The method is able to efficiently predict, from the raw amino acid sequence, both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method, called GPIPE, was shown to greatly improve the prediction of GPI-anchored proteins over all previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while keeping the false positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, which allowed the definition of specific amino acid abundances in the different regions considered. Furthermore, the hypothesis, proposed in the literature, that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello, eSLDB http://gpcr.biocomp.unibo.it/esldb, GPIPE http://gpcr.biocomp.unibo.it/gpipe
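Not part of the original abstract: a minimal sketch of the core idea behind a "balanced" SVM, i.e. preventing over-represented classes from dominating the decision function. It is approximated here with scikit-learn's per-class weighting rather than the BaCelLo implementation itself; the feature vectors and class sizes are hypothetical.

```python
# Illustrative sketch (not BaCelLo): class-weighted SVM on an unbalanced dataset.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical sequence features (e.g. amino acid composition) for an unbalanced
# training set: 900 examples of one localization class vs 100 of another.
X = np.vstack([rng.normal(0.0, 1.0, (900, 20)),
               rng.normal(0.5, 1.0, (100, 20))])
y = np.array([0] * 900 + [1] * 100)

# class_weight="balanced" rescales the penalty C inversely to class frequency,
# which mimics the effect of training on a balanced dataset.
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X, y)
print(clf.score(X, y))
```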
Abstract:
The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work was the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of inter-residue distances, related to the specific type of amino acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure could be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all the existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of protein sequences and structures available poses new fundamental problems that still await interpretation. Nevertheless, these data form the basis for the design of new strategies for tackling problems such as the prediction of protein structure and function. Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of the annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists of assigning sequences to a specific group of functionally related sequences, grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity arise from multi-domain proteins, from proteins that share common domains but do not necessarily share the same function, and from the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of annotating multi-domain proteins and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates over the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in present databases of molecular functions and structures.
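Not part of the original abstract: a minimal sketch of building a residue-contact network from a distance matrix and computing the two small-world descriptors discussed above, characteristic path length and clustering coefficient, with `networkx`. The coordinates are random placeholders for real C-alpha positions, and the 8 Å cutoff is an assumed, common choice rather than the thesis' setting.

```python
# Illustrative sketch (not the thesis code): contact map -> network -> small-world metrics.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
coords = np.cumsum(rng.normal(0, 2.5, (100, 3)), axis=0)   # fake C-alpha trace

# Contact map: residue pairs closer than the cutoff are in contact.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
contact_map = (dist < 8.0) & ~np.eye(len(coords), dtype=bool)

G = nx.from_numpy_array(contact_map.astype(int))
G = G.subgraph(max(nx.connected_components(G), key=len))   # largest connected component

print("characteristic path length:", nx.average_shortest_path_length(G))
print("clustering coefficient:   ", nx.average_clustering(G))
```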
Abstract:
The presented study carried out an analysis of rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A literature review revealed a general lack of studies dealing with the modeling of the rural built environment, and hence a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in the number of buildings. The specific aim of the presented study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces behind building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps covering different aspects of the development of a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the most suitable algorithm to be adopted in relation to the statistical theory and method used, and the calibration and evaluation of the model. A different combination of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents that is not suitable for building allocation. Presence or absence of buildings can therefore be adopted as indicators of such driving conditions, since they represent the expression of the action of the driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modeling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, makes it possible to associate the concepts of presence and absence with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and so they are generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory variable analysis and for the identification of the key driving variables behind the site selection process for new building allocation. The model developed by following the methodology is applied to a case study to test the validity of the methodology. In particular, the study area chosen for testing the methodology is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics occurred intensively. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model was carried out with reference to spatial data regarding the periurban and rural area of the study area within the 1975-2005 time period, by means of a generalised linear model. The resulting output of the model fit is a continuous grid surface whose cells assume probability values, ranging from 0 to 1, of building occurrence across the rural and periurban area of the study area. Hence the response variable captures the changes in the rural built environment that occurred in this time interval and is related to the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretive capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends which occurred in other study areas, and with reference to different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the distribution of the rural built environment in the study area. In order to predict future scenarios it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the time interval used for calibration.
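Not part of the original abstract: a minimal sketch of a presence/absence logistic GLM of the kind described above, relating building occurrence at sampled points to explanatory variables. The covariates (slope, distance to roads), their coefficients and the data are hypothetical placeholders for the thesis' GIS-derived variables.

```python
# Illustrative sketch (not the thesis workflow): logistic-regression GLM on
# presence/absence points with synthetic explanatory variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
slope = rng.uniform(0, 30, n)              # terrain slope [%] (hypothetical)
dist_road = rng.uniform(0, 2000, n)        # distance to nearest road [m] (hypothetical)

# Synthetic presence/absence generated from a known relationship
logit = 1.0 - 0.05 * slope - 0.002 * dist_road
presence = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([slope, dist_road]))
model = sm.GLM(presence, X, family=sm.families.Binomial()).fit()
print(model.summary())

# Evaluating the fitted model over a regular grid would yield the 0-1
# probability surface described in the abstract.
```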
Abstract:
Clusters have increasingly become an essential part of policy discourses at all levels (EU, national, regional) dealing with regional development, competitiveness, innovation, entrepreneurship and SMEs. These impressive efforts to promote the concept of clusters in the policy-making arena have been accompanied by much less academic and scientific research investigating the actual economic performance of firms in clusters, the design and execution of cluster policies, and going beyond singular case studies towards a more methodologically integrated and comparative approach to the study of clusters and their real-world impact. The theoretical background is far from consolidated, and there is a variety of methodologies and approaches for studying and interpreting this phenomenon, while at the same time there is little comparability among studies of actual cluster performance. The conceptual framework of clustering suggests that clusters affect performance, but theory makes little prediction as to the ultimate distribution of the value created by clusters. This thesis takes the case of Eastern European countries for two reasons. One is that clusters, as coopetitive environments, are a new phenomenon there, as the previous centrally planned system did not allow for such types of firm organization. The other is that, as new EU member states, they have been subject to the increased popularization of the cluster policy approach by the European Commission, especially in the framework of the National Reform Programmes related to the Lisbon objectives. The originality of the work lies in the fact that, starting from an overview of theoretical contributions on clustering, it offers a comparative empirical study of clusters in transition countries. There have been very few attempts in the literature to examine cluster performance in a comparative cross-country perspective. The thesis adds to this an analysis of cluster policies and their implementation, or lack thereof, as a way of analysing how the cluster concept has been introduced to transition economies. Our findings show that the implementation of cluster policies does vary across countries, with some countries having embraced it more than others. The specific modes of implementation, however, are very similar, based mostly on soft measures such as funding for cluster initiatives, usually directed towards the creation of cluster management structures or cluster facilitators. They are essentially founded on the common assumption that the added value of clusters lies in the creation of linkages among firms, human capital, skills and knowledge at the local level, most often perceived as the regional level. Oftentimes geographical proximity is not a necessary element in the application process, and cluster applications are very similar to network memberships. Cluster mapping is rarely a factor in the selection of cluster initiatives for funding, and the related question of critical mass and expected outcomes is not considered. In fact, monitoring and evaluation are not elements of the cluster policy cycle that have received much attention. Bulgaria and the Czech Republic are the countries which have implemented cluster policies most decisively, Hungary and Poland have made significant efforts, while Slovakia and Romania have used cluster initiatives only sporadically and not systematically. When examining whether firms located within regional clusters in fact perform better and are more efficient than similar firms outside clusters, we do find positive results across countries and across sectors. The only country with a negative impact of being located in a cluster is the Czech Republic.
Abstract:
A main objective of human movement analysis is the quantitative description of joint kinematics and kinetics. This information offers great potential for addressing clinical problems both in orthopaedics and in motor rehabilitation. Previous studies have shown that the assessment of kinematics and kinetics from stereophotogrammetric data necessitates a setup phase, special equipment and expertise to operate. Besides, this procedure may cause a feeling of uneasiness in the subjects and may hinder their walking. The general aim of this thesis is the implementation and evaluation of new 2D markerless techniques, in order to contribute to the development of an alternative to traditional stereophotogrammetric techniques. At first, the focus of the study was the estimation of ankle-foot complex kinematics during the stance phase of gait. Two particular cases were considered: subjects barefoot and subjects wearing ankle socks. The use of socks was investigated in view of the development of the hybrid method proposed in this work. Different algorithms were analyzed, evaluated and implemented in order to obtain a 2D markerless solution for estimating the kinematics in both cases. The validation of the proposed technique was performed against a traditional stereophotogrammetric system. The implementation of the technique leads towards an easy-to-configure (and more comfortable for the subject) alternative to the traditional stereophotogrammetric system. The abovementioned technique was then improved so that knee flexion/extension could be measured with a 2D markerless technique. The main changes in the implementation concerned occlusion handling and background segmentation. With the additional constraints, the proposed technique was applied to the estimation of knee flexion/extension and compared with a traditional stereophotogrammetric system. Results showed that the knee flexion/extension estimates from the traditional stereophotogrammetric system and the proposed markerless system were highly comparable, making the latter a potential alternative for clinical use. A contribution was also given to the estimation of lower limb kinematics in children with cerebral palsy (CP). For this purpose, a hybrid technique, which uses high-cut underwear and ankle socks as “segmental markers” in combination with a markerless methodology, was proposed. The proposed hybrid technique differs from the abovementioned markerless technique in terms of the algorithm chosen. Results showed that the proposed hybrid technique can become a simple and low-cost alternative to traditional stereophotogrammetric systems.
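Not part of the original abstract: a minimal sketch of the final step of such a 2D pipeline, computing knee flexion/extension as the angle between thigh and shank segments once the hip, knee and ankle positions have been extracted from the image plane. The pixel coordinates below are hypothetical; the image-processing stages (segmentation, occlusion handling) are not shown.

```python
# Illustrative sketch (not the thesis' markerless pipeline): 2D knee angle from
# three joint positions in image coordinates.
import numpy as np

def knee_flexion_deg(hip, knee, ankle):
    """Angle between thigh (hip->knee) and shank (knee->ankle); 0 = full extension."""
    thigh = np.asarray(knee, float) - np.asarray(hip, float)
    shank = np.asarray(ankle, float) - np.asarray(knee, float)
    cos_a = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical joint positions in pixels for one frame
print(knee_flexion_deg(hip=(320, 200), knee=(330, 340), ankle=(300, 470)))
```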
Abstract:
The characteristics of aphasic speech in various languages have been the core of numerous studies, but Arabic in general, and Palestinian Arabic in particular, is still a virgin field in this respect. However, it is of vital importance to have a clear picture of the specific aspects of Palestinian Arabic that might be affected in the speech of aphasics in order to establish screening, diagnosis and therapy programs based on a clinical linguistic database. Hence the central questions of this study are: what are the main neurolinguistic features of Palestinian aphasics’ speech at the phonetic-acoustic level, and to what extent are the results similar to those obtained for other languages? In general, this study is a survey of the most prominent features of Palestinian Broca’s aphasics’ speech. The main acoustic parameters of vowels and consonants are analysed, such as vowel duration, formant frequency, Voice Onset Time (VOT), intensity and frication duration. The deviant patterns among the Broca’s aphasics are presented and compared with those of normal speakers. The nature of the deficit, whether phonetic or phonological, is also discussed. Moreover, the coarticulatory characteristics and some prosodic patterns of Broca’s aphasics are addressed. Samples were collected from six Broca’s aphasics from the same local region. The acoustic analysis conducted on a range of consonant and vowel parameters revealed differences between the speech patterns of Broca’s aphasics and normal speakers. For example, impairments in the voicing contrast between voiced and voiceless stops were found in Broca’s aphasics. This feature was not observed for the fricatives produced by the Palestinian Broca’s aphasics, and hence deviates from data obtained for aphasics’ speech in other languages. The Palestinian Broca’s aphasics displayed particular problems with the emphatic sounds. They exhibited deviant coarticulation patterns, another feature that is inconsistent with data obtained from studies of other languages. However, several other findings are in accordance with those reported for various other languages, such as impairments in VOT. The results are in accordance with the suggestion that speech production deficits in Broca’s aphasics are not related to phoneme selection but rather to articulatory implementation, and that some speech output impairments are related to timing and planning deficits.
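Not part of the original abstract: a minimal sketch of how two of the acoustic parameters listed above, Voice Onset Time and segment intensity, could be computed from manually labelled time points in a speech signal. The signal, sampling rate and label times are hypothetical placeholders, not the study's recordings or analysis software.

```python
# Illustrative sketch (not the study's acoustic analysis): VOT and RMS intensity
# from hypothetical, hand-labelled boundaries in a 1-second signal.
import numpy as np

sr = 16000                                              # sampling rate [Hz]
signal = np.random.default_rng(0).normal(0, 0.1, sr)    # placeholder for 1 s of speech

# VOT = voicing onset minus stop-burst release (label times in seconds, hypothetical)
burst_release, voicing_onset = 0.120, 0.165
vot_ms = (voicing_onset - burst_release) * 1000.0
print(f"VOT = {vot_ms:.1f} ms")

# Intensity of a labelled vowel segment, as an RMS level in dB (relative)
seg = signal[int(0.165 * sr):int(0.300 * sr)]
intensity_db = 20 * np.log10(np.sqrt(np.mean(seg ** 2)) + 1e-12)
print(f"vowel intensity = {intensity_db:.1f} dB")
```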
Abstract:
The main objective of this work was to investigate the impact of different hybridization concepts and levels of hybridization on the fuel economy of a standard road vehicle, where both conventional and non-conventional hybrid architectures are treated in exactly the same way from the point of view of overall energy flow optimization. Hybrid component models were developed and presented in detail, as well as the simulation results, mainly for the NEDC cycle. The analysis was performed on four different parallel hybrid powertrain concepts: Hybrid Electric Vehicle (HEV), High Speed Flywheel Hybrid Vehicle (HSF-HV), Hydraulic Hybrid Vehicle (HHV) and Pneumatic Hybrid Vehicle (PHV). In order to perform an equitable analysis of the different hybrid systems, the comparison was also performed on the basis of the same usable system energy storage capacity (i.e. 625 kJ for the HEV, HSF and HHV), but in the case of the pneumatic hybrid system the maximal storage capacity was limited by the size of the system in order to comply with the packaging requirements of the vehicle. The simulations were performed within the IAV GmbH VeLoDyn software simulator, based on the Matlab/Simulink software package. An advanced cycle-independent control strategy, the Equivalent Consumption Minimization Strategy (ECMS), was implemented in the hybrid supervisory control unit in order to solve the power management problem for all hybrid powertrain solutions. In order to maintain the State of Charge within desired boundaries during different cycles, and to facilitate easy implementation and recalibration of the control strategy for very different hybrid systems, a Charge Sustaining Algorithm was added to the ECMS framework. Also, a Variable Shift Pattern (VSP-ECMS) algorithm was proposed as an extension of the ECMS capabilities, so as to include gear selection in the determination of the minimal (energy) cost function of the hybrid system. Further, a cycle-based energetic analysis was performed in all the simulated cases, and the results are reported in the corresponding chapters.
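Not part of the original abstract: a minimal sketch of the core ECMS idea, choosing at each instant the power split that minimises fuel power plus equivalence-factor-weighted battery power. All models below (constant efficiencies, fixed equivalence factor, power limits) are toy placeholders, not the VeLoDyn/IAV implementation or the thesis' calibration.

```python
# Illustrative sketch (not the thesis controller): instantaneous ECMS power split.
import numpy as np

def ecms_split(p_demand, s_eq=2.5, eta_eng=0.35, eta_batt=0.90, p_batt_max=30e3):
    """Return the battery power [W] minimising the instantaneous ECMS cost."""
    candidates = np.linspace(-p_batt_max, p_batt_max, 121)   # charge ... discharge
    best_cost, best_p = np.inf, 0.0
    for p_batt in candidates:
        p_eng = p_demand - p_batt
        if p_eng < 0:                         # no engine braking in this toy model
            continue
        fuel_power = p_eng / eta_eng          # chemical power drawn from fuel
        # battery power expressed as equivalent fuel power via the equivalence factor
        batt_equiv = s_eq * (p_batt / eta_batt if p_batt > 0 else p_batt * eta_batt)
        cost = fuel_power + batt_equiv
        if cost < best_cost:
            best_cost, best_p = cost, p_batt
    return best_p

print(ecms_split(p_demand=40e3))   # battery assistance [W] chosen at this instant
```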
Abstract:
The PhD activity described in this document was carried out at the Microsatellite and Microsystem Laboratory of the II Faculty of Engineering, University of Bologna. The main objective was the design and development of a GNSS receiver for the orbit determination of microsatellites in low Earth orbit. The development starts from the electronic design and goes up to the implementation of the navigation algorithms, covering all the aspects involved in this type of application. The use of GPS receivers for orbit determination is a consolidated application used in many space missions, but the deployment of new GNSS systems within a few years, such as the European Galileo, the Chinese COMPASS and the Russian modernized GLONASS, poses new challenges and offers new opportunities to increase orbit determination performance. The evaluation of the improvements brought by the new systems, together with the implementation of a receiver compatible with at least one of them, are the main activities of the PhD. The activities can be divided into three sections: receiver requirements definition and prototype implementation, design and analysis of the GNSS signal tracking algorithms, and design and analysis of the navigation algorithms. The receiver prototype is based on a Virtex FPGA by Xilinx and includes a PowerPC processor. The architecture follows the software-defined radio paradigm, so most of the signal processing is performed in software while only what is strictly necessary is done in hardware. The tracking algorithms are implemented as a combination of a Phase Locked Loop and a Frequency Locked Loop for the carrier, and a Delay Locked Loop with variable bandwidth for the code. The navigation algorithm is based on the extended Kalman filter and includes an accurate LEO orbit model.
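Not part of the original abstract: a minimal sketch of one predict/update step of a Kalman filter of the kind underlying the navigation solution described above, using a toy linear constant-velocity state instead of the full linearised LEO orbit model. All matrices and the fake measurement are hypothetical placeholders.

```python
# Illustrative sketch (not the receiver firmware): one Kalman predict/update step.
import numpy as np

dt = 1.0
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])       # state transition (position, velocity)
Q = 1e-3 * np.eye(6)                                # process noise covariance
H = np.hstack([np.eye(3), np.zeros((3, 3))])        # position-only measurement model
R = 25.0 * np.eye(3)                                # measurement noise covariance [m^2]

x = np.zeros(6)                                     # state estimate
P = 100.0 * np.eye(6)                               # estimate covariance
z = np.array([10.0, -4.0, 3.0])                     # fake position measurement [m]

# Predict
x = F @ x
P = F @ P @ F.T + Q
# Update
y = z - H @ x                                       # innovation
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)                      # Kalman gain
x = x + K @ y
P = (np.eye(6) - K @ H) @ P

print(x[:3])    # updated position estimate
```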
Abstract:
This thesis presents the SEELF (Sustainable EEL Fishery) Index, a methodology for the evaluation of the European eel (Anguilla anguilla) for the implementation of an effective Eel Management Plan (EMP), as defined by EU Regulation No. 1100/2007. SEELF uses internal and external indices, age and blood parameters, and selects suitable specimens for restocking; it is also a reliable tool for eel stock management. The SEELF Index was developed in two versions: SEELF A, to be used in field operations (catch and release, eel status monitoring), and SEELF B, to be used for quality control (food production) and research (eel status monitoring). Health status was also evaluated by biomarker analysis (ChE), and the data were compared with the age of the eels. Age determination was performed by otolith reading and fish scale reading, and a calibration between the two methods was possible. The study area was the Comacchio lagoon, a brackish coastal lagoon in Italy, well known as an example of a suitable environment for eel fishery, where the capability to use local natural resources has long been a key factor for successful fishery management. The Comacchio lagoon is proposed as an area where an effective EMP can be carried out, in agreement with the main features (management of basins, reduction of mortality due to predators, etc.) highlighted for the designation of a European Restocking Area (ERA). The ERA is a new concept, proposed as a pillar of a new strategy on eel management and conservation. Furthermore, the features of ERAs can be useful in the framework of the European Scale Eel Management Plan (ESEMP), proposed as a European-scale implementation of the EMP, providing more effective conservation measures for eel management.
Abstract:
The European External Action Service (EEAS or Service) is one of the most significant and most debated innovations introduced by the Lisbon Treaty. This analysis intends to explain the anomalous design of the EEAS in light of its function, which consists in the promotion of external action coherence. Coherence is a principle of the EU legal system, which requires synergy in the external actions of the Union and its Members. It can be enforced only through the coordination of European policy-makers' initiatives, by bridging the gap between the 'Communitarian' and intergovernmental approaches. This is the 'Union method' envisaged by A. Merkel: "coordinated action in a spirit of solidarity - each of us in the area for which we are responsible but all working towards the same goal". The EEAS embodies the 'Union method', since it is institutionally linked to both Union organs and Member States. It is also capable of enhancing synergy in policy management and promoting unity in international representation, since its field of action is delimited not by an abstract concern for institutional balance but by a pragmatic assessment of the need for coordination in each sector. The challenge is now to make sure that this pragmatic approach is applied with respect to all the activities of the Service, in order to reinforce its effectiveness. The coordination brought by the EEAS is in fact the only means through which a European foreign policy can come into being: the choice is not between the Community method and the intergovernmental method, but between a coordinated position and nothing at all.