22 results for non-global solution

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

40.00%

Publisher:

Abstract:

Motion control is a sub-field of automation, in which the position and/or velocity of machines are controlled using some type of device. In motion control the position, velocity, force, pressure, etc. profiles are designed in such a way that the different mechanical parts work as a harmonious whole, in which perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is today plays an important role in achieving ever better performance, effectiveness and safety. The network connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators and drive controllers, and man-machine interfaces is commonly called a fieldbus. Since motion transmission is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise the synchronization among the different parts is lost, with all the resulting consequences. In this thesis, the problem of trajectory reconstruction in the case of an event-triggered communication system is addressed. The most important feature that a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency and spatial consistency. Starting from the basic system composed of one master and one slave, and moving to systems made up of many slaves and one master, or many masters and one slave, the problems of profile reconstruction, preservation of temporal properties and, subsequently, synchronization of different profiles in networks adopting an event-triggered communication system are shown. These networks are characterized by the fact that common knowledge of a global time is not available; therefore they are non-deterministic networks. Each topology is analyzed, and the proposed solution based on phase-locked loops, adopted for the basic master-slave case, is extended to cope with the other configurations.
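To make the phase-locked-loop idea concrete, the following is a minimal, purely illustrative sketch (not the algorithm from the thesis) of a slave node that uses a discrete-time PI phase-locked loop to track the master's time base from sporadic, event-triggered updates; all names, gains and event data are assumptions.

```python
# Hypothetical illustration only: a discrete-time PI phase-locked loop that lets a
# slave track the master's time base from sporadic (event-triggered) updates.
# All parameter names and gains are illustrative, not taken from the thesis.

def run_pll(events, dt=0.001, kp=0.5, ki=0.05, t_end=2.0):
    """events: list of (arrival_time, master_time) pairs received by the slave."""
    est_time, est_rate, integ = 0.0, 1.0, 0.0   # slave's estimate of master time and rate
    pending = list(events)
    trace = []
    t = 0.0
    while t < t_end:
        est_time += est_rate * dt                # free-running propagation between events
        if pending and t >= pending[0][0]:       # an event-triggered update arrives
            _, master_time = pending.pop(0)
            err = master_time - est_time         # phase error at the event instant
            integ += ki * err
            est_rate = 1.0 + kp * err + integ    # PI correction of the local rate
        trace.append((t, est_time))
        t += dt
    return trace

if __name__ == "__main__":
    # master runs 2% fast; the slave only sees it at irregular event instants
    evts = [(0.2, 0.204), (0.5, 0.510), (0.9, 0.918), (1.5, 1.530)]
    print(run_pll(evts)[-1])
```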

Relevance:

40.00%

Publisher:

Abstract:

Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting these measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at global and regional scales are a fundamental tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines. This sum is often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution for overcoming the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study a signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes containing multi-scale features, discontinuities and sharp spikes. Wavelets are essentially used in two ways when applied to the study of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of application of wavelets in the geophysical field are the object of this work. First, we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface-wave phase-velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with some synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
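As a rough illustration of what a wavelet parameterization does (this is not the thesis code or data), the sketch below decomposes a synthetic 1-D "velocity profile" with PyWavelets and reconstructs it from a sparse subset of coefficients; the wavelet family, levels and threshold are arbitrary choices.

```python
# Illustrative sketch: representing a 1-D model in a wavelet basis with PyWavelets,
# then reconstructing it from only the largest coefficients (a sparse parameterization).
import numpy as np
import pywt

x = np.linspace(0, 1, 512)
model = np.sin(2 * np.pi * 3 * x) + (x > 0.6) * 0.8        # smooth trend + sharp discontinuity

coeffs = pywt.wavedec(model, 'db4', level=5)                # multi-scale decomposition
arr, slices = pywt.coeffs_to_array(coeffs)

thr = np.quantile(np.abs(arr), 0.90)                        # keep the largest 10% of coefficients
arr_sparse = np.where(np.abs(arr) >= thr, arr, 0.0)

recon = pywt.waverec(pywt.array_to_coeffs(arr_sparse, slices, output_format='wavedec'), 'db4')
print("relative error:", np.linalg.norm(recon[:512] - model) / np.linalg.norm(model))
```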

Relevance:

30.00%

Publisher:

Abstract:

The relation between intercepted light and orchard productivity has been considered linear, although this dependence seems to depend more on the planting system than on light intensity. At the whole-plant level, an increase in irradiance does not always lead to an improvement in productivity. One of the reasons can be the plant's intrinsic inefficiency in using energy. Generally, in full light only 5-10% of the total incoming energy is allocated to net photosynthesis. Therefore, preserving or improving this efficiency becomes pivotal for scientists and fruit growers. Even though a conspicuous amount of energy is reflected or transmitted, plants cannot avoid absorbing photons in excess. Over-excitation of chlorophyll promotes the production of reactive species, increasing the risk of photoinhibition. The dangerous consequences of photoinhibition have forced plants to evolve a complex, multilevel machinery able to dissipate the excess energy as heat (non-photochemical quenching), to move electrons (water-water cycle, cyclic transport around PSI, glutathione-ascorbate cycle and photorespiration) and to scavenge the reactive species generated. The price plants must pay for this equipment is the use of CO2 and reducing power, with a consequent decrease in photosynthetic efficiency, both because some photons are not used for carboxylation and because an actual loss of CO2 and reducing power occurs. Net photosynthesis increases with light until the saturation point; additional PPFD does not improve carboxylation, but increases the role of the alternative pathways in energy dissipation as well as ROS production and the risk of photoinhibition. The wide photo-protective apparatus, however, is not able to cope with all of the excessive incoming energy, and photodamage therefore occurs. Any event that increases photon pressure and/or decreases the efficiency of the photo-protective mechanisms described (e.g., thermal stress, water and nutritional deficiency) can exacerbate photoinhibition. In nature, only a small amount of damaged photosystems is likely to be found, because of the effective, efficient and energy-consuming recovery system. Since damaged PSII is quickly repaired at an energetic expense, it would be interesting to investigate how much PSII recovery costs in terms of plant productivity. This PhD dissertation aims to improve knowledge of the several strategies adopted for managing incoming energy and of the implications of excess light for photodamage in peach. The thesis is organized in three scientific units. In the first section a new rapid, non-intrusive, whole-tissue and universal technique for functional PSII determination was implemented and validated on different kinds of plants: C3 and C4 species, woody and herbaceous plants, wild type and chlorophyll b-less mutants, monocots and dicots. In the second unit, using a "singular" experimental orchard named the "Asymmetric orchard", the relation between light environment and photosynthetic performance, water use and photoinhibition was investigated in peach at the whole-plant level; furthermore, the effect of variations in photon pressure on energy management was considered at the single-leaf level. In the third section the quenching analysis method suggested by Kornyeyev and Hendrickson (2007) was validated on peach. It was then applied in the field, where the influence of a moderate reduction in light and water on peach photosynthetic performance, water requirements, energy management and photoinhibition was studied.
Using solar energy as the fuel for life is intrinsically hazardous for a plant, given the constant, high risk of photodamage. This dissertation tries to highlight the complex relation between plants, in particular peach, and light, analysing the principal strategies plants have developed to manage incoming light so as to derive the maximum possible benefit while minimizing the risks. First, the new method proposed for functional PSII determination, based on P700 redox kinetics, appears to be a valid, non-intrusive, universal and field-applicable technique, also because it is able to probe the whole leaf tissue in depth rather than only the first leaf layers, as fluorescence does. The fluorescence parameter Fv/Fm gives a good estimate of functional PSII, but only when data obtained from the adaxial and abaxial leaf surfaces are averaged. In addition to this method, the energy quenching analysis proposed by Kornyeyev and Hendrickson (2007), combined with the photosynthesis model proposed by von Caemmerer (2000), is a powerful tool for studying, even in the field, the relation between the plant and environmental factors such as water and temperature but, above all, light. The "Asymmetric" training system is a good way to study the relations among light energy, photosynthetic performance and water use in the field. At the whole-plant level, net carboxylation increases with PPFD up to a saturation point. Excess light, rather than improving photosynthesis, may exacerbate water and thermal stress, leading to stomatal limitation. Furthermore, too much light does not promote an improvement in net carboxylation but rather PSII damage: in the most light-exposed plants about 50-60% of the total PSII is inactivated. At the single-leaf level, net carboxylation increases up to the saturation point (1000-1200 μmol m-2 s-1), and excess light is dissipated by non-photochemical quenching and by non-net-carboxylative electron transports. The latter follow a pattern quite similar to the Pn/PPFD curve, reaching saturation at almost the same photon flux density. At middle-low irradiance, NPQ seems to be limited by lumen pH, because the incoming photon pressure is not enough to generate the optimal lumen pH for full activation of violaxanthin de-epoxidase (VDE). Peach leaves try to cope with excess light by increasing the non-net-carboxylative transports. As PPFD rises, the xanthophyll cycle is increasingly activated and the rate of non-net-carboxylative transports is reduced. Some of these alternative transports, such as the water-water cycle, the cyclic transport around PSI and the glutathione-ascorbate cycle, are able to generate additional H+ in the lumen in order to support VDE activation when light may be limiting. Moreover, the alternative transports seem to act as an important dissipative pathway when high temperature and sub-optimal conductance increase the risk of photoinhibition. In peach, a moderate reduction in water and light does not cause a decrease in net carboxylation; rather, by diminishing the incoming light and the evapo-transpirative demand, stomatal conductance decreases and water use efficiency improves. Therefore, by lowering light intensity to levels that are still non-limiting, water could be saved without compromising net photosynthesis. The quenching analysis is able to partition absorbed energy among the several utilization, photoprotection and photo-oxidation pathways. When recovery is permitted, only a few PSII remain unrepaired, although more net PSII damage is recorded in plants placed in full light.
In this experiment too, in over-saturating light the main dissipation pathway is non-photochemical quenching; at middle-low irradiance it seems to be pH-limited, and other pathways, such as photorespiration and the alternative transports, are used to support photoprotection and to contribute to creating the optimal trans-thylakoid ΔpH for violaxanthin de-epoxidase. These alternative pathways become the main quenching mechanisms in very low-light environments. Another aspect pointed out by this study is the role of NPQ as a dissipative pathway when conductance becomes severely limiting. The evidence that only a small amount of damaged PSII is seen in nature indicates the presence of an effective and efficient recovery mechanism that masks the real photodamage occurring during the day. At the single-leaf level, when repair is not allowed, leaves in full light are twofold more photoinhibited than shaded ones. Therefore, light in excess of the photosynthetic optimum does not promote net carboxylation but increases water loss and PSII damage. The greater the photoinhibition, the more photosystems must be repaired and, consequently, the more energy and dry matter must be allocated to this essential activity. Since above the saturation point net photosynthesis is constant while photoinhibition increases, it would be interesting to investigate what photodamage costs in terms of tree productivity. Another aspect of pivotal importance to be investigated further is the combined influence of light and other environmental parameters, such as water status, temperature and nutrition, on the management of light, water and photosynthates in peach.
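For orientation only, the sketch below computes the textbook chlorophyll-fluorescence indices (Fv/Fm, ΦPSII, NPQ) that underlie quenching analyses of the kind cited above; it is not the Kornyeyev and Hendrickson (2007) partitioning itself, and the numbers are made up.

```python
# Illustrative only: standard fluorescence-derived indices, with fabricated values.

def fluorescence_indices(Fo, Fm, Fs, Fm_prime):
    fv_fm = (Fm - Fo) / Fm                 # maximum PSII efficiency (dark-adapted)
    phi_psii = (Fm_prime - Fs) / Fm_prime  # effective PSII quantum yield in the light
    npq = (Fm - Fm_prime) / Fm_prime       # non-photochemical quenching
    return fv_fm, phi_psii, npq

print(fluorescence_indices(Fo=400, Fm=2000, Fs=900, Fm_prime=1200))
```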

Relevance:

30.00%

Publisher:

Abstract:

The research described in this thesis, as the title suggests, is aimed at reducing fuel consumption in cars with a strongly sporting character and high specific performance. In particular, all the activities described refer to a well-defined car model, the Maserati Quattroporte. The background of this work is the strong push towards the reduction of so-called greenhouse gases, i.e. carbon dioxide, in line with the provisions of the Kyoto protocol. The need to reduce CO2 emissions into the atmosphere is affecting every sector of society: from the heating of private buildings to that of industrial plants, from energy generation to production processes in the broadest sense. Within this picture, car manufacturers are clearly called upon to make a considerable effort, since a considerable percentage of the carbon dioxide produced and released into the atmosphere every day is attributed to them. To the delicate problem of pollution must be added another, perhaps even more immediate and direct one, linked to economic considerations. Fossil fuels, as everyone knows, are a non-renewable energy source, whose availability is tied to deposits located in specific areas of the planet and is not inexhaustible. Moreover, the socio-political situation that the Middle East is facing, together with the growing demand from countries whose industrialization started only recently and is proceeding at a dizzying pace, have literally driven up the price of oil. For this reason, a car that is efficient in the broad sense, and hence has reduced fuel consumption, is in effect a product feature appreciated from a marketing point of view, even in the highest vehicle segments. In this research, the problem of fuel consumption has been addressed as a consequence of the overall behaviour of the vehicle in terms of efficiency, evaluating the best compromise among the different functional areas making up the vehicle. A substantial part of the work was devoted to developing a calculation model with which to carry out a series of sensitivity analyses on the influence of the various vehicle parameters on overall fuel consumption. On the basis of these indications, a modification of the ratios of the electro-actuated gearbox was proposed, with the aim of optimizing the compromise between fuel consumption and performance without significantly affecting the latter. The proposed solution was actually built and tested on a vehicle, making it possible to verify the results and to carry out an in-depth correlation of the fuel-consumption calculation model. The benefit obtained in terms of range was decidedly significant with reference both to the European homologation driving cycles and to the US ones. The repercussions on performance were also analysed and, here too, the large amount of data collected made it possible to improve the level of correlation of the performance simulation model. The vehicle with the newly proposed gear ratios was then compared with a Maserati Quattroporte prototype equipped with an automatic gearbox and torque converter.
This further activity made it possible to evaluate the different behaviour of the two solutions, both in terms of instantaneous consumption and of overall consumption measured during the main chassis-dynamometer test missions prescribed by the regulations. The last section of the work was devoted to evaluating the energy efficiency of the vehicle system, understood as the resistance to motion encountered while travelling at a given speed. The coast-down curves of the Quattroporte and of some competitors were investigated experimentally, and measures aimed at reducing the aerodynamic drag coefficient were proposed, subject to the constraint of not altering the styling of the car.
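To illustrate the kind of coast-down analysis mentioned above (this is not the thesis model; vehicle mass and the speed trace are fabricated), the road load can be written as F(v) = A + B·v + C·v², and since during coasting m·dv/dt = −F(v), fitting a quadratic in v to the measured deceleration yields the rolling (A), speed-proportional (B) and aerodynamic (C) terms:

```python
# Illustrative only: estimating road-load coefficients from a synthetic coast-down test.
import numpy as np

m = 1990.0                                   # vehicle mass [kg], made-up value
t = np.linspace(0.0, 60.0, 121)              # time [s]
v = 40.0 * np.exp(-t / 70.0)                 # synthetic coast-down speed trace [m/s]

dvdt = np.gradient(v, t)                     # numerical derivative of speed
force = -m * dvdt                            # total resistance at each speed [N]

C, B, A = np.polyfit(v, force, 2)            # quadratic fit in v
print(f"A={A:.1f} N, B={B:.2f} N/(m/s), C={C:.3f} N/(m/s)^2")
# C relates to aerodynamics (C = 0.5 * rho * Cd * A_front), so drag reductions show up here.
```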

Relevance:

30.00%

Publisher:

Abstract:

Self-organisation is increasingly being regarded as an effective approach to tackling the complexity of modern systems. The self-organisation approach allows the development of systems that exhibit complex dynamics and adapt to environmental perturbations without requiring complete knowledge of the future surrounding conditions. However, the development of self-organising systems (SOS) is driven by different principles with respect to traditional software engineering. For instance, engineers typically design systems by combining smaller elements, where the composition rules depend on the reference paradigm but generally produce predictable results. Conversely, SOS display non-linear dynamics, which can hardly be captured by deterministic models, and, although robust with respect to external perturbations, are quite sensitive to changes in their inner working parameters. In this thesis, we describe methodological aspects concerning the early design stage of SOS built on the multi-agent paradigm: in particular, we refer to the A&A metamodel, where MAS are composed of agents and artefacts, i.e. environmental resources. We then describe an architectural pattern that has been extracted from a recurrent solution in designing self-organising systems: this pattern is based on a MAS environment formed by artefacts, modelling non-proactive resources, and environmental agents acting on artefacts so as to enable self-organising mechanisms. In this context, we propose a scientific approach for the early design stage of the engineering of self-organising systems: the process is iterative, and each cycle is articulated in four stages: modelling, simulation, formal verification and tuning. During the modelling phase we mainly rely on the existence of a self-organising strategy observed in Nature and, hopefully, encoded as a design pattern. Simulations of an abstract system model are used to drive design choices until the required quality properties are obtained, thus providing guarantees that the subsequent design steps will lead to a correct implementation. However, system analysis based exclusively on simulation results does not provide sound guarantees for the engineering of complex systems: to this purpose, we envision the application of formal verification techniques, specifically model checking, in order to characterise the system behaviours exactly. During the tuning stage, parameters are tweaked in order to meet the target global dynamics and feasibility constraints. In order to evaluate the methodology, we analysed several systems: in this thesis we describe only three of them, i.e. the most representative ones for each of the three years of the PhD course. We analyse each case study using the presented method, and describe the formal tools and techniques exploited.
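A minimal, hypothetical sketch of the "environmental agent acting on an artefact" pattern described above, using a pheromone-like evaporation/diffusion rule as the self-organising mechanism; it is not the A&A metamodel API, and all names and parameters are assumptions.

```python
# Illustrative sketch: a non-proactive artefact plus an environmental agent that enacts
# self-organisation by repeatedly evaporating and diffusing a field stored in the artefact.

class GridArtefact:
    """A non-proactive resource: it only stores state and exposes operations."""
    def __init__(self, size):
        self.values = [[0.0] * size for _ in range(size)]

    def deposit(self, x, y, amount):
        self.values[x][y] += amount

class EnvironmentalAgent:
    """A proactive entity that acts on the artefact to enable the self-organising mechanism."""
    def __init__(self, artefact, evaporation=0.1, diffusion=0.05):
        self.a, self.evap, self.diff = artefact, evaporation, diffusion

    def step(self):
        v = self.a.values
        n = len(v)
        new = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                neigh = [v[i + di][j + dj] for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if 0 <= i + di < n and 0 <= j + dj < n]
                new[i][j] = (1 - self.evap) * v[i][j] + self.diff * (sum(neigh) - len(neigh) * v[i][j])
        self.a.values = new

grid = GridArtefact(5)
grid.deposit(2, 2, 10.0)            # a user agent marks a location
env = EnvironmentalAgent(grid)
for _ in range(20):                 # tuning stage: iterate until the field spreads and decays
    env.step()
print(round(grid.values[2][2], 3))
```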

Relevance:

30.00%

Publisher:

Abstract:

This work deals with some classes of linear second-order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in R^N, generating metric spaces of Carnot-Carathéodory type. The Carnot-Carathéodory metric related to a family {X_j}, j = 1, ..., m, is the control distance obtained by minimizing the time needed to go from one point to another along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in R^N. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling "X-ellipticity" and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a maximum principle for linear second-order differential operators for which we only assume a Sobolev-type inequality together with a summability condition on the lower-order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case R^N is the underlying manifold of a Lie group, and moreover we require the vector fields to be left-invariant. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a scalar convexity for mean-value operators of L-subharmonic functions, where L is our differential operator. In the third chapter we prove a necessary and sufficient condition for the regularity of boundary points for the Dirichlet problem, on an open subset of R^N, related to a sub-Laplacian. On a Carnot group we give the essential background for this type of operator, and introduce the notion of "quasi-boundedness". We then show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of the boundary points.
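For reference, the control distance described in words above is commonly written as follows (generic notation; the thesis may use a different but equivalent formulation):

```latex
d_X(x,y) \;=\; \inf\Big\{\, T>0 \;:\; \exists\,\gamma\colon[0,T]\to\mathbb{R}^N \ \text{absolutely continuous},\ \gamma(0)=x,\ \gamma(T)=y,
\quad \dot\gamma(t)=\sum_{j=1}^{m} a_j(t)\,X_j(\gamma(t)) \ \ \text{with}\ \ \sum_{j=1}^{m} a_j(t)^2\le 1 \,\Big\}
```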

Relevance:

30.00%

Publisher:

Abstract:

Supramolecular architectures can be built up from a single molecular component (building block) to obtain a network of organic or inorganic interactions, creating a new emergent condensed phase of matter, such as gels, liquid crystals and solid crystals. Furthermore, the generation of multicomponent supramolecular hybrid architectures, a mix of organic and inorganic components, increases the complexity of the condensed aggregate, with functional properties useful for important areas of research, such as materials science, medicine and nanotechnology. One may design a molecule storing a recognition pattern; programming an informed self-organization process then enables it to grow into a hierarchical architecture. From the molecular level to the supramolecular level, in a bottom-up fashion, it is possible to create a new emergent structure-function, where the system, as a whole, is open to its own environment to exchange energy, matter and information. "The emergent property of the whole assembly is superior to the sum of its single parts." In this thesis I present new architectures and functional materials built through the self-assembly of guanosine, in the absence or in the presence of a cation, in solution and on surfaces. By appropriate manipulation of intermolecular non-covalent interactions, the spatial (structural) and temporal (dynamic) features of these supramolecular architectures are controlled. Guanosine G7 (5',3'-di-decanoyl-deoxyguanosine) is able to interconvert reversibly between a supramolecular polymer and a discrete octameric species through dynamic cation binding and release. Guanosine G16 (2',3'-O-isopropylidene-5'-O-decylguanosine) shows selective binding from a mixture of cations of different nature. Remarkably, reversibility, selectivity, adaptability and serendipity are shared features through which to appreciate the creativity of a complex molecular self-organization system in its multilevel, multiscale hierarchical growth. Creativity - in the general sense, the creation of a new thing, a new way of thinking, a new functionality or a new structure - emerges from a cross-contamination of different disciplines such as biology, chemistry, physics, architecture, design, philosophy and the science of complexity.

Relevance:

30.00%

Publisher:

Abstract:

Water distribution network optimization is a challenging problem due to the dimension and the complexity of these systems. Since the second half of the twentieth century this field has been investigated by many authors. Recently, to overcome the discrete nature of the variables and the non-linearity of the equations, research has focused on the development of heuristic algorithms. These algorithms do not require continuity or linearity of the problem functions because they are coupled to an external hydraulic simulator that solves the mass-continuity and energy-conservation equations of the network. In this work, an NSGA-II (Non-dominated Sorting Genetic Algorithm II) has been used. This is a heuristic multi-objective genetic algorithm based on the analogy of evolution in nature. Starting from an initial random set of solutions, called a population, it evolves them towards a front of solutions that minimize, separately and simultaneously, all the objectives. This can be very useful in practical problems where multiple and conflicting goals are common. Usually, one of the main drawbacks of these algorithms is their computational cost: being a stochastic search, a large number of solutions must be analysed before good ones are found. The results of this thesis on the classical optimal design problem show that it is possible to improve the results by modifying the mathematical definition of the objective functions and the survival criterion, by inserting good solutions created by a cellular automaton, and by using rules created by a classifier algorithm (C4.5). This part has been tested using the version of NSGA-II supplied by the Centre for Water Systems (University of Exeter, UK) in the MATLAB® environment. Even though guiding the search can constrain the algorithm, with the risk of not finding the optimal set of solutions, it can greatly improve the results. Subsequently, with the help of CINECA, a version of NSGA-II was implemented in C and parallelized: the results on global parallelization show the speed-up, while the results on island parallelization show that communication among islands can improve the optimization. Finally, some tests on the optimization of pump scheduling were carried out. In this case, good results are found for a small network, while the solutions of a big problem are affected by the lack of constraints on the number of pump switches. Possible future research concerns the insertion of further constraints and the guidance of the evolution. In the end, the optimization of water distribution systems is still far from a definitive solution, but improvements in this field can be very useful in reducing the cost of solutions to practical problems, where the high number of variables makes them very difficult to manage from a human point of view.
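To clarify what "non-dominated sorting" means in NSGA-II, here is a minimal, self-contained sketch of that step for a two-objective minimization (e.g. pipe cost versus a hydraulic-deficit measure); it is not the Exeter or CINECA implementation, and the objective values are made up.

```python
# Illustrative sketch: the non-dominated sorting step at the core of NSGA-II.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Return the list of fronts (lists of indices), best front first."""
    remaining = list(range(len(objs)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

population = [(1.2e6, 0.00), (0.9e6, 0.15), (1.5e6, 0.00), (0.7e6, 0.40), (1.0e6, 0.05)]
print(non_dominated_sort(population))   # the first front collects the trade-off designs
```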

Relevance:

30.00%

Publisher:

Abstract:

This doctoral work gains deeper insight into the dynamics of knowledge flows within and across clusters, unfolding their features, directions and strategic implications. Alliances, networks and personnel mobility are acknowledged as the three main channels of inter-firm knowledge flows, thus offering three heterogeneous measures to analyze the phenomenon. The interplay between the three channels and the richness of available research methods have allowed for the elaboration of three different papers and perspectives. The common empirical setting is the IT cluster in Bangalore, chosen for its distinctive features as a high-tech cluster and for its steady, double-digit yearly growth around the service-based business model. The first paper deploys both a firm-level and a tie-level analysis, exploring the cases of 4 domestic companies and of 2 MNCs active in the cluster, according to a cluster-based perspective. The distinction between business-domain knowledge and technical knowledge emerges from the qualitative evidence, further confirmed by quantitative analyses at the tie level. At the firm level, the degree of specialization seems to influence the kind of knowledge shared, while at the tie level both the frequency of interaction and the governance mode prove to determine differences in the distribution of knowledge flows. The second paper zooms out and considers inter-firm networks; focusing in particular on the role of the cluster boundary, internal and external networks are analyzed in terms of their size, long-term orientation and degree of exploration. The research method is purely qualitative and allows for the observation of the evolving strategic role of the internal network: from exploitation-based to exploration-based. Moreover, a causal pattern is emphasized, linking the evolution and features of the external network to the evolution and features of the internal network. The final paper addresses the softer and more micro-level side of knowledge flows: personnel mobility. A social capital perspective is developed here, which considers both the acquisition and the loss of employees as building inter-firm ties, thus enhancing the company's overall social capital. Negative binomial regression analyses at the dyad level test the significant impact of cluster affiliation (cluster firms vs non-cluster firms), industry affiliation (IT firms vs non-IT firms) and foreign affiliation (MNCs vs domestic firms) in shaping the uneven distribution of personnel mobility, and thus of knowledge flows, among companies.
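Purely as an illustration of the dyad-level negative binomial regression mentioned above (fabricated data; variable names are hypothetical and not the thesis dataset):

```python
# Illustrative only: negative binomial regression of personnel moves between firm dyads.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300                                           # number of firm dyads
same_cluster = rng.integers(0, 2, n)              # 1 if both firms are in the cluster
same_industry = rng.integers(0, 2, n)             # 1 if both are IT firms
mnc_involved = rng.integers(0, 2, n)              # 1 if the dyad involves an MNC

rate = np.exp(0.2 + 0.8 * same_cluster + 0.5 * same_industry - 0.3 * mnc_involved)
moves = rng.poisson(rate)                         # count of personnel moves in the dyad

X = sm.add_constant(np.column_stack([same_cluster, same_industry, mnc_involved]))
result = sm.GLM(moves, X, family=sm.families.NegativeBinomial()).fit()
print(result.params)                              # coefficients on the log-count scale
```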

Relevance:

30.00%

Publisher:

Abstract:

Until recently the debate on the ontology of spacetime had only a philosophical significance, since, from a physical point of view, General Relativity had been made "immune" to the consequences of the "Hole Argument" simply by reducing the subject to the assertion that solutions of the Einstein equations which are mathematically different and related by an active diffeomorphism are physically equivalent. From a technical point of view, the natural reading of the consequences of the "Hole Argument" has always been to go further and say that the mathematical representation of spacetime in General Relativity inevitably contains a "superfluous structure" brought to light by the gauge freedom of the theory. This apparent split between the philosophical outcome and the physical one has been corrected thanks to a meticulous and complicated formal analysis of the theory in a fundamental and recent (2006) work by Luca Lusanna and Massimo Pauri entitled "Explaining Leibniz equivalence as difference of non-inertial appearances: dis-solution of the Hole Argument and physical individuation of point-events". The main result of this article is to have shown that, from a physical point of view, the point-events of Einstein's empty spacetime, in a particular class of models considered by the authors, are literally identifiable with the autonomous degrees of freedom of the gravitational field (the Dirac observables, DO). In the light of philosophical considerations based on realism assumptions about theories and entities, the two authors then conclude that spacetime point-events have a degree of "weak objectivity", since, depending on a NIF (non-inertial frame) and unlike the points of homogeneous Newtonian space, they are embedded in a rich and complex non-local, holistic structure provided by the "ontic part" of the metric field. Therefore, given the complex structure of spacetime that General Relativity highlights, and within the declared limits of a methodology based on a Galilean scientific representation, we can certainly assert that spacetime has "elements of reality"; but the inevitably relational elements involved in the physical detection of point-events in the absence of matter (highlighted by the "ontic part" of the metric field, the DO) depend closely on the choice of the global spatiotemporal laboratory in which the dynamics is expressed (the NIF). According to the two authors, a peculiar kind of structuralism takes shape: point structuralism, sharing features with both the absolutist/substantivalist tradition and the relationalist one. The intention of this thesis is to propose a method of approaching the problem that is, at least initially, independent of the previous ones: an approach based on the possibility of describing the gravitational field at three distinct levels. In other words, keeping in mind the results achieved by the work of Lusanna and Pauri and following their underlying philosophical assumptions, we intend to converge partially towards their structuralist approach, but starting from what we believe is the "foundational peculiarity" of General Relativity, a characteristic inherent in the elements that constitute its formal structure: its essentially geometric nature as a theory, considered independently of the empirical need for a theory of measurement.
Observing the theory of General Relativity from this perspective, we can find a "triple modality" for describing the gravitational field that is essentially based on a geometric interpretation of the spacetime structure. The gravitational field is now "visible" no longer in terms of its autonomous degrees of freedom (the DO), which in fact have neither a tensorial nor, therefore, a geometric nature, but can be analysed at three levels: a first one, the potential level (which the theory identifies with the components of the metric tensor); a second one, the connection level (which in the theory determines the forces acting on masses and, as such, offers a level of description analogous to the one Newtonian gravitation provides in terms of components of the gravitational field); and, finally, a third level, that of the Riemann tensor, which is peculiar to General Relativity alone. Focusing from the beginning on what is called the "third level" seems to present an immediate first advantage: it leads directly to a description of spacetime properties in terms of gauge-invariant quantities, which makes it possible to "short-circuit" the long path that, in the works analysed, leads to the identification of the "ontic part" of the metric field. It is then shown how, at this last level, it is possible to establish a "primitive level of objectivity" of spacetime in terms of the effects that matter exerts on extended domains of the spacetime geometric structure; these effects are described by invariants of the Riemann tensor, in particular of its irreducible part: the Weyl tensor. The convergence towards Lusanna and Pauri's claim that there exists a holistic, non-local and relational structure on which the quantitatively identified properties of point-events depend (in addition to their own intrinsic detection), even if reached through different considerations, is realized, in our opinion, by assigning a crucial role to the degree of spacetime curvature defined by the Weyl tensor, even in the case of empty spacetimes (as in the analysis conducted by Lusanna and Pauri). Ultimately, matter, regarded as the physical counterpart of spacetime curvature whose expression is the Weyl tensor, changes the value of this tensor even in spacetimes without matter. In this way, going back to the approach of Lusanna and Pauri, it affects the evolution of the DOs and, consequently, the physical identification of point-events (as our authors claim). In conclusion, we think that it is possible to see the holistic, relational and non-local structure of spacetime also through the "behaviour" of the Weyl tensor in terms of the Riemann tensor. This "behaviour", which leads to geometrical effects of curvature, is characterized from the beginning by the fact that it concerns extensive domains of the manifold (although it should be pointed out that the values of the Weyl tensor change from point to point), by virtue of the fact that the action of matter located elsewhere extends indefinitely. Finally, we think that the characteristic relationality of the spacetime structure should be identified in this "primitive level of organization" of spacetime.
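As a point of reference for the role given above to the Weyl tensor as the irreducible, trace-free part of the Riemann tensor, the standard textbook decomposition in n ≥ 3 dimensions reads (generic notation, not taken from the thesis):

```latex
C_{abcd} \;=\; R_{abcd}
  \;-\; \frac{2}{n-2}\left( g_{a[c} R_{d]b} \;-\; g_{b[c} R_{d]a} \right)
  \;+\; \frac{2}{(n-1)(n-2)}\, R \, g_{a[c}\, g_{d]b}
```

For n = 4 the prefactors reduce to 1 and 1/3; in vacuum, where the Ricci tensor vanishes, the Riemann tensor coincides with the Weyl tensor, which is why the Weyl tensor carries the curvature information of empty spacetimes.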

Relevance:

30.00%

Publisher:

Abstract:

Objective: To investigate the prognostic significance of ST-segment elevation (STE) in aVR associated with ST-segment depression (STD) in other leads in patients with non-STE acute coronary syndrome (NSTE-ACS). Background: In NSTE-ACS patients, STD has been extensively associated with severe coronary lesions and poor outcomes. The prognostic role of STE in aVR is uncertain. Methods: We enrolled 888 consecutive patients with NSTE-ACS. They were divided into two groups according to the presence or absence, on the admission ECG, of STE ≥ 1 mm in aVR together with STD (defined as the high-risk ECG pattern). The primary and secondary endpoints were in-hospital cardiovascular (CV) death and the rate of culprit left main disease (LMD), respectively. Results: Patients with the high-risk ECG pattern (n=121) had a worse clinical profile compared to patients (n=575) without it [median GRACE (Global Registry of Acute Coronary Events) risk score = 142 vs. 182, respectively]. A total of 75% of patients underwent coronary angiography. The rate of in-hospital CV death was 3.9%. On multivariable analysis, patients who had the high-risk ECG pattern showed an increased risk of CV death (OR = 2.88, 95% CI 1.05-7.88) and culprit LMD (OR = 4.67, 95% CI 1.86-11.74) compared to patients who did not. The prognostic significance of the high-risk ECG pattern was maintained even after adjustment for the GRACE risk score (OR = 2.28, 95% CI 1.06-4.93 and OR = 4.13, 95% CI 2.13-8.01, for the primary and secondary endpoints, respectively). Conclusions: STE in aVR associated with STD in other leads predicts in-hospital CV death and culprit LMD. This pattern may add prognostic information in patients with NSTE-ACS on top of the recommended scoring system.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this doctoral thesis is to develop a genetic-algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts with the aircraft specifications; then the turbofan or turboprop best suited to the specific application is chosen. In the field of aeronautical piston engines, which has been dormant for several decades as interest shifted towards turbine aircraft, new materials with improved performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables may be very high, and several non-linearities are needed to describe the behaviour of the phenomena. In this case the objective function has many local extrema, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, may offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a single-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive-system configuration of minimum mass that satisfies the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller-shaft speeds and the maximum pressure in the combustion chamber. The bounds of the design variables, which describe the solution domain from the geometrical point of view, are also introduced. In the MATLAB® optimization environment, the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population; then the genetic operators, such as reproduction, mutation, selection and crossover, are applied. In the reproduction step, elitism is applied in order to protect the fittest individuals from possible disruption by mutation and recombination, letting them survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensional values of the components are saved to an Excel® file, in order to build an automatic 3D CAD model of each component of the propulsive system, giving a direct preview of the final product while still in the engine's preliminary design phase. To show the performance of the algorithm and validate this optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, a four-cylinder, four-stroke Diesel. Many verifications are made on the mechanical components of the engine, in order to test their feasibility and to decide their survival through the generations. A system of inequalities is used to describe the non-linear relations between the design variables and to check the components under static and dynamic load configurations.
The geometrical bounds of the design variables are taken from actual engine data and similar design cases. Among the many simulations run to test the algorithm, twelve have been chosen as representative of the distribution of the individuals. Then, as an example, for each simulation the corresponding 3D models of the crankshaft and the connecting rod have been automatically built. In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) compared with the original configuration, and an acceptable robustness of the method. The algorithm developed here is shown to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfil the feasibility specifications. This optimization algorithm could boost aeronautical piston-engine development, speeding up the production rate and joining modern computational performance and technological awareness to long-standing traditional design experience.
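The sketch below is a minimal, purely illustrative single-objective GA with elitism and a penalty for constraint violations, of the general kind described above; the "mass" and "constraint" functions are toy placeholders, not the thesis's structural models or its MATLAB® implementation.

```python
# Illustrative sketch: elitist single-objective GA minimizing a penalized mass function.
import random

BOUNDS = [(0.02, 0.06), (0.05, 0.12), (0.10, 0.30)]   # hypothetical design variables [m]

def mass(x):                       # toy objective: proxy for the sum of component masses
    return 50 * x[0] + 30 * x[1] + 80 * x[2]

def violation(x):                  # toy constraint: a minimum "stiffness" requirement
    return max(0.0, 0.25 - (x[0] + x[2]))

def fitness(x):                    # penalized objective (lower is better)
    return mass(x) + 1e3 * violation(x)

def evolve(pop_size=40, generations=100, elite=2, mut=0.1):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        nxt = pop[:elite]                                   # elitism: keep the best untouched
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)  # selection from the better half
            child = [random.choice(g) for g in zip(p1, p2)] # uniform crossover
            child = [min(max(g + random.gauss(0, mut * (hi - lo)), lo), hi)
                     for g, (lo, hi) in zip(child, BOUNDS)] # bounded Gaussian mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

print(evolve())
```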

Relevance:

30.00%

Publisher:

Abstract:

Hydrothermal fluids are a fundamental resource for understanding and monitoring volcanic and non-volcanic systems. This thesis is focused on the study of hydrothermal systems through numerical modeling with the geothermal simulator TOUGH2. Several simulations are presented, and the geophysical and geochemical observables arising from fluid circulation are analyzed in detail throughout the thesis. In a volcanic setting, the fluids feeding fumaroles and hot springs may play a key role in hazard evaluation. The evolution of fluid circulation is caused by a strong interaction between the magmatic and hydrothermal systems. A simultaneous analysis of different geophysical and geochemical observables is a sound approach for interpreting monitored data and for inferring a consistent conceptual model. The observables analyzed are ground displacement, gravity changes, electrical conductivity, the amount, composition and temperature of the gases emitted at the surface, and the extent of the degassing area. The results highlight the different temporal responses of the considered observables, as well as their different radial patterns of variation. However, the magnitude, temporal response and radial pattern of these signals depend not only on the evolution of fluid circulation; a major role is also played by the rock properties considered. Numerical simulations highlight the differences that arise from the assumption of different permeabilities, for both homogeneous and heterogeneous systems. Rock properties affect hydrothermal fluid circulation, controlling both the range of variation and the temporal evolution of the observable signals. Low-temperature fumaroles and low discharge rates may be affected by atmospheric conditions. Detailed parametric simulations were performed, aimed at understanding the effects of system properties, such as permeability and gas reservoir overpressure, on diffuse degassing when air temperature and barometric pressure changes are applied to the ground surface. Hydrothermal circulation, however, is not only a characteristic of volcanic systems. Hot fluids are also involved in several problems of practical interest, such as geothermal engineering, the propagation of nuclear waste in porous media, and Geological Carbon Sequestration (GCS). The current concept for large-scale GCS is the direct injection of supercritical carbon dioxide into deep geological formations which typically contain brine. Upward displacement of such brine from deep reservoirs, driven by the pressure increases resulting from carbon dioxide injection, may occur through abandoned wells, permeable faults or permeable channels. Brine intrusion into aquifers may degrade groundwater resources. Numerical results show that the pressure rise drives dense water up into the conduits and does not necessarily result in continuous flow. Rather, the overpressure leads to a new hydrostatic equilibrium if the fluids are initially density-stratified. If the warm, salty fluid does not cool while passing through the conduit, an oscillatory solution is possible. Parameter studies delineate steady-state (static) and oscillatory solutions.
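A back-of-the-envelope criterion consistent with the statement above about hydrostatic equilibrium versus continuous flow (illustrative only; not taken from the thesis's TOUGH2 simulations): for a conduit of height h connecting a brine reservoir (density rho_b) to an aquifer of fresher water (density rho_f), sustained upward brine flow requires the injection-induced overpressure to exceed the extra weight of the brine column,

```latex
\Delta P \;>\; (\rho_b - \rho_f)\, g\, h
```

For example, with rho_b − rho_f ≈ 100 kg/m³, g ≈ 9.81 m/s² and h ≈ 1000 m, the threshold is roughly 1 MPa; below it, the system settles into a new hydrostatic equilibrium with a partially risen brine column.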

Relevance:

30.00%

Publisher:

Abstract:

The chosen topic concerns the adoption of private standards by agri-food companies and its consequences on the overall management of the firm. In particular, the aim of this work is to evaluate the implications of the adoption of the BRC Global Standard for Food Safety by Italian agri-food companies. The evaluation of this impact is based on the perceptions of company managers regarding economic, managerial, commercial, quality-related and organizational aspects. The research followed two fundamental steps: first, 7 in-depth interviews were conducted with the Quality Managers (QMs) of BRC Food-certified Italian agri-food companies. The variables extracted from the qualitative content analysis of the interviews were included, together with those identified in the literature, in the questionnaire created for the subsequent survey. The questionnaire was sent by e-mail, with telephone support, to a sample of companies selected by random sampling. After a predefined data-collection period, 192 questionnaires had been completed. The descriptive analysis of the data shows that the QMs largely agree with the statements concerning the elements of impact. The most widely shared statements concern: efficiency of the HACCP system, efficiency of the traceability system, control procedures, staff training, better management of emergencies and non-conformities, and better implementation and understanding of other certified management systems. ANOVA between qualitative and quantitative variables, with the related F test, shows that certain company characteristics, such as geographical area, company size, the category to which the company belongs and the company's position with respect to ISO 9001, can influence the respondents' opinions in different ways. Subsequently, a factor analysis extracted 8 factors from an initial set of 28 variables. On the basis of these factors, a hierarchical cluster analysis was applied, leading to the segmentation of the sample into 5 different groups. Each group was interpreted on the basis of a profile determined by its position with respect to the various factors. The results, besides being validated through focus groups with researchers and practitioners in the sector, were also supported by a subsequent qualitative survey conducted with 4 large British retailers. The purpose of this further survey was to assess the existence of divergent opinions towards suppliers, which would support the hypothesis of an information-asymmetry problem that still persists in the main contractual relationships despite the presence of private standards. Further research could estimate whether evaluating the impact of the BRC standard can help processing companies implement other quality standards, and could assess which variables instead influence perceptions of the costs of adopting the standard.
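Purely as an illustration of the reported analysis pipeline (factor extraction followed by hierarchical clustering), the sketch below runs the same sequence on fabricated survey data; the counts of respondents, items, factors and clusters simply mirror the abstract, and nothing here is the thesis dataset or code.

```python
# Illustrative only: factor analysis of Likert-type responses, then hierarchical clustering.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(192, 28)).astype(float)   # 192 questionnaires, 28 items

scores = FactorAnalysis(n_components=8, random_state=0).fit_transform(responses)

Z = linkage(scores, method='ward')                 # hierarchical (agglomerative) clustering
groups = fcluster(Z, t=5, criterion='maxclust')    # cut the dendrogram into 5 groups
print(np.bincount(groups)[1:])                     # respondents per cluster
```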

Relevance:

30.00%

Publisher:

Abstract:

After reconstructing the legal framework governing fixed-term employment contracts in Italy and in the main European legal systems (Spain, France and England), the dissertation addresses the most relevant problematic issues of this institution, with reference to the private and public sectors, highlighting the main doctrinal and case-law disputes. Particular attention is devoted to the questions that arose following the latest legislative amendments introduced by the so-called "Collegato lavoro" (Law no. 183/2010), up to the decisive intervention of the Constitutional Court which, with judgment no. 303 of 9 November 2011, declared the legitimacy of the provision introducing the lump-sum compensation indemnity, which is additional to the conversion of the contract. All the issues discussed highlight the difficulties faced by the higher courts, as well as by EU and national judges, in finding a uniform and shared approach to the resolution of the disputes arising in this field. The dissertation closes with some reflections on the themes of flexibility and precariousness in the world of work, through a quantitative and qualitative evaluation of the institution, in an attempt to answer a few questions: is flexibility necessarily precariousness, or can it be read as a special form of employment? What are the possible antidotes to precariousness? In conclusion, it emerged that flexibility can represent a problem for companies and workers only in the long run. The solution was identified in the opportunity to invest in training. A new "socially and economically sustainable flexibility" was thus envisaged, to be achieved with the help of the Regions and, therefore, of the contributions of the European Regional Development Fund: in this way the worker can be guaranteed continuity of employment through targeted training paths, while the employer will not have to bear the costs of training fixed-term employees.