908 results for mortgagees powers and duties


Relevance:

30.00%

Publisher:

Abstract:

Introduction: The aim of this study was to assess the influence of curing time and power on the degree of conversion and surface microhardness of 3 orthodontic composites. Methods: One hundred eighty discs, 6 mm in diameter, were divided into 3 groups of 60 samples according to the composite used: Transbond XT (3M Unitek, Monrovia, Calif), Opal Bond MV (Ultradent, South Jordan, Utah), and Transbond Plus Color Change (3M Unitek). Each group was further divided into 3 subgroups (n = 20). Five samples were used to measure conversion, and 15 were used to measure microhardness. A light-emitting diode curing unit with multiwavelength emission of broad light was used for curing at 3 power levels (530, 760, and 1520 mW) and 3 times (8.5, 6, and 3 seconds), always totaling 4.56 joules. Five specimens from each subgroup were ground and mixed with potassium bromide to produce 8-mm tablets, which were compared with 5 others made similarly from the respective uncured composite. These were placed into a spectrometer, and software was used for analysis. A microhardness tester was used to take Knoop hardness (KHN) measurements on 15 discs of each subgroup. The data were analyzed with 2 analysis-of-variance tests at 2 levels. Results: Differences were found in the degree of conversion of the composites cured at different times and powers (P < 0.01). The composites showed similar degrees of conversion when light cured for 8.5 seconds (80.7%) and 6 seconds (79.0%), but not for 3 seconds (75.0%). The degrees of conversion of the composites also differed, with group 3 (87.2%) higher than group 2 (83.5%), which was higher than group 1 (64.0%). Differences in microhardness were also found (P < 0.01), with lower microhardness at 8.5 seconds (35.2 KHN) but no difference between 6 seconds (41.6 KHN) and 3 seconds (42.8 KHN). Group 3 had the highest surface microhardness (35.9 KHN), compared with group 2 (33.7 KHN) and group 1 (30.0 KHN). Conclusions: Curing time can be reduced to 6 seconds by increasing the power, with a slight decrease in the degree of conversion at 3 seconds; this decrease has a positive effect on the surface microhardness.
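As a quick sanity check on the constant-energy design of the protocols described above, the radiant energy delivered is simply power multiplied by time. A minimal sketch (illustrative arithmetic only, not taken from the paper's methods):

```python
# Check that each curing protocol delivers roughly the same radiant
# energy (power x time), the nominal 4.56 J quoted in the abstract.
protocols = [(530, 8.5), (760, 6.0), (1520, 3.0)]  # (mW, s)

for power_mw, time_s in protocols:
    energy_j = power_mw / 1000 * time_s  # mW -> W; W x s = J
    print(f"{power_mw:>5} mW x {time_s} s = {energy_j:.2f} J")

# Output: 4.51 J, 4.56 J, 4.56 J -- the 8.5 s setting comes out
# marginally below the nominal 4.56 J with these rounded power values.
```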

Relevance:

30.00%

Publisher:

Abstract:

An important characteristic of the current international setting is the crisis of the existing structure, rather than the emergence of a new order. The rise of new interests and demands, as well as the speed of the transformation, makes the current understanding of global governance more complex. Brazil, like other medium powers, has an interest in institutionalised multilateralism as a means of increasing its bargaining capacity and hindering the unilateralism of major powers, without being antagonistic to them. It is attempting to increase its weight in traditional international bodies, which provide the grounding for international legitimacy, as well as in new informal arrangements. While this strategy could lead to the establishment of a new hierarchy that brings in countries of growing relative importance, it has put the weight of regional integration into another perspective in Brazilian foreign policy.

Relevance:

30.00%

Publisher:

Abstract:

In the pattern recognition research field, Support Vector Machines (SVM) have been an effective tool for classification purposes, successfully employed in many applications. The SVM input data are transformed into a high-dimensional space using a kernel function, where linear separation is more likely. However, there are some computational drawbacks associated with SVM. One of them is the computational burden required to find the most adequate kernel-mapping parameters for each non-linearly separable input data space, which directly affects SVM performance. This paper introduces the Polynomial Powers of Sigmoid for SVM kernel mapping and shows its advantages over well-known kernel functions using real and synthetic datasets.
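The exact Polynomial Powers of Sigmoid formulation is defined in the paper; as a hedged sketch of the general mechanics, the snippet below plugs an assumed form, an integer power n of the usual sigmoid (tanh) kernel, into scikit-learn's SVC as a custom callable kernel. The function name pps_kernel and all parameter values are illustrative assumptions.

```python
# Sketch: custom "polynomial power of sigmoid"-style kernel for an SVM.
# The kernel form below is an assumption for illustration; the paper's
# actual PPS definition may differ.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

def pps_kernel(X, Y, gamma=0.01, coef0=1.0, n=2):
    """Gram matrix K[i, j] = tanh(gamma * <x_i, y_j> + coef0) ** n."""
    return np.tanh(gamma * X @ Y.T + coef0) ** n

X, y = load_iris(return_X_y=True)
clf = SVC(kernel=pps_kernel)  # scikit-learn accepts a callable kernel
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```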

Relevance:

30.00%

Publisher:

Abstract:

What is the most mysterious of human feelings, and what is the most common way in which everyone is subject to it? A monotonous, unvarying life, in which someone does the same kind of work, at the same pace, at the same time, without any change of events, creates a burdensome feeling in a person, yet people are often unaware of it. We do not know where we are going, but we keep moving at the same pace, since in a practical sense we are being credited for something. The real question is how well we maintain our balance with nature, so that we can truly feel the value and quality of life and know the genuine happiness of fulfilling our work. If life becomes totally materialistic and the emotional aspect disappears from human beings, so that social interaction never finds a place alongside our duties, then, in a sense, there is no need for human beings at all. If we are as developed as our domination over the past few centuries suggests, and have created a modern social world, then we can create all kinds of devices and software to handle the work for which people would otherwise have to stay overtime. In that way human life would move through some progress and see some development. But in the environment where we live, nothing new develops, because the work remains a duty for the human being rather than a task assigned to a machine. All those devices are absent from society because human beings are already taking charge of the work themselves.

Relevance:

30.00%

Publisher:

Abstract:

The Livestock Waste Management Act requires all livestock operations with 300 animal units or more to be inspected by the Nebraska Department of Environmental Quality (DEQ) to determine whether livestock wastes contaminate surface or ground water. This NebFact discusses the following parts of the Livestock Waste Management Act: Act (how cited); Terms (defined); Livestock operation, exemption, livestock waste control facility, permit, restriction; Construction permit or operating permit (when required), livestock waste control facilities, classification, restrictions; Section (how construed); Cold water class A streams (designation); Permit (acknowledgment required); Livestock operation (request inspection, when, fees, department, duties); Permits (duration, modification); Permit (application and modification fees); Livestock Waste Management Cash Fund (created, use, investment, report, legislative intent); Applicant (rejection, grounds, application, information required, certification required); Postconstruction inspection requirement; Department (contracts authorized, permit application, notice required); Permit application (approval from Department of Natural Resources and Department of Environmental Quality, powers); Council (rules and regulations); and Enforcement of act (legislative intent).

Relevance:

30.00%

Publisher:

Abstract:

This paper examines the local power of the likelihood ratio, Wald, score and gradient tests in the presence of a scalar parameter, phi say, that is orthogonal to the remaining parameters. We show that some of the coefficients that define the local powers remain unchanged regardless of whether phi is known or needs to be estimated, whereas the others can be written as the sum of two terms: the first is the corresponding term obtained as if phi were known, and the second is an additional term arising from the fact that phi is unknown. The contribution of each set of parameters to the local powers of the tests can then be examined. Various implications of our main result are stated and discussed. Several examples are presented for illustrative purposes.
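The flavor of the result can be illustrated with a toy Monte Carlo that is not the paper's setup: testing a normal mean, with the variance playing the role of the orthogonal nuisance parameter phi, and comparing the power when phi is known against when it must be estimated. All numbers below are illustrative assumptions.

```python
# Toy Monte Carlo in the spirit of the abstract: power of a test on a
# normal mean when the orthogonal nuisance parameter phi (the variance)
# is known versus estimated. All numbers are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_rep, alpha = 20, 20_000, 0.05
mu_alt, sigma = 0.5, 1.0            # local alternative and true std. dev.

z_crit = stats.norm.ppf(1 - alpha / 2)
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)

rej_known = rej_estimated = 0
for _ in range(n_rep):
    x = rng.normal(mu_alt, sigma, n)
    rej_known += abs(np.sqrt(n) * x.mean() / sigma) > z_crit
    rej_estimated += abs(np.sqrt(n) * x.mean() / x.std(ddof=1)) > t_crit

print("power with phi known:    ", rej_known / n_rep)
print("power with phi estimated:", rej_estimated / n_rep)
```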

Relevance:

30.00%

Publisher:

Abstract:

The present study raised the hypothesis that trophic status in a tropical coastal food web from southeastern Brazil can be measured by the relation between total mercury (THg) and the nitrogen stable isotope ratio (delta(15)N) among its components. The analysed species were grouped into six trophic positions: primary producer (phytoplankton), primary consumer (zooplankton), consumer 1 (omnivorous shrimp), consumer 2 (pelagic carnivores, represented by squid and fish species), consumer 3 (demersal carnivores, represented by fish species) and consumer 4 (pelagic-demersal top carnivore, represented by the fish Trichiurus lepturus). The values of THg, delta(15)N and trophic level (TLv) increased significantly from primary producer toward top carnivore. Our data regarding the trophic magnification factor (6.84) and biomagnification powers (0.25 for delta(15)N and 0.83 for TLv) indicate that Hg biomagnification across trophic positions is high in this tropical food web, which could be primarily related to the quality of the local water.
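As a hedged sketch of how such numbers are typically obtained (synthetic data, not the study's measurements): the "biomagnification power" is the slope of a regression of log10(THg) on delta(15)N, and a trophic magnification factor follows by scaling that slope to one trophic level, commonly assumed to span about 3.4 per mil in delta(15)N.

```python
# Sketch of the usual trophic-magnification calculation: regress
# log10(THg) on delta15N; the slope is the "biomagnification power" and
# 10**(slope * d15N-per-trophic-level) is a trophic magnification factor.
# Synthetic data below -- not the study's values.
import numpy as np

d15N = np.array([4.0, 7.5, 10.2, 13.0, 15.8, 18.5])      # per mil
log_thg = np.array([-1.8, -1.0, -0.4, 0.3, 1.0, 1.7])    # log10 ng/g

slope, intercept = np.polyfit(d15N, log_thg, 1)
tmf = 10 ** (slope * 3.4)  # assumes ~3.4 per-mil d15N per trophic level
print(f"biomagnification power (slope): {slope:.2f}")
print(f"trophic magnification factor:   {tmf:.2f}")
```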

Relevance:

30.00%

Publisher:

Abstract:

A Nd:YLF/KGW Raman laser has been investigated in this work. We have demonstrated CW output powers at six different wavelengths, 1147 nm (0.70 W), 1163 nm (0.95 W), 549 nm (0.65 W), 552 nm (1.90 W), 573 nm (0.60 W) and 581 nm (1.10 W), with higher peak powers achieved under quasi-CW operation. Raman conversion of the 1053 nm fundamental emission is reported for the first time, enabling two new wavelengths in crystalline Raman lasers, 549 nm and 552 nm. The weak thermal lensing associated with Nd:YLF has enabled good beam quality, M^2 <= 2.0, and stable operation in relatively long cavities. (C) 2012 Optical Society of America
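The six lines are mutually consistent under simple wavenumber arithmetic. A minimal sketch, assuming KGW's two strong Raman shifts of about 768 and 901 cm^-1 (literature values for KGW, not quoted from this paper): subtracting each shift from the 1053 nm fundamental gives the Stokes lines, frequency-doubling the Stokes lines gives the yellow-orange pair, and summing fundamental plus Stokes gives the green pair.

```python
# Wavenumber bookkeeping for the reported lines. KGW Raman shifts of
# ~768 and ~901 cm^-1 are literature values, assumed here for illustration.
FUND_NM = 1053.0
SHIFTS_CM = (768.0, 901.0)

def nm_to_cm(nm): return 1e7 / nm   # wavelength [nm] -> wavenumber [cm^-1]
def cm_to_nm(cm): return 1e7 / cm

for shift in SHIFTS_CM:
    stokes_cm = nm_to_cm(FUND_NM) - shift
    print(f"shift {shift:.0f} cm^-1: "
          f"Stokes {cm_to_nm(stokes_cm):.0f} nm, "
          f"Stokes SHG {cm_to_nm(2 * stokes_cm):.0f} nm, "
          f"fund.+Stokes SFG {cm_to_nm(nm_to_cm(FUND_NM) + stokes_cm):.0f} nm")

# shift 768 cm^-1: Stokes 1146 nm, SHG 573 nm, SFG 549 nm
# shift 901 cm^-1: Stokes 1163 nm, SHG 582 nm, SFG 553 nm
# (within a nanometre or two of the six reported wavelengths)
```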

Relevance:

30.00%

Publisher:

Abstract:

We studied free-surface oscillations of a fluid in a cylindrical tank excited by an electric motor with a limited power supply. We investigated the possibility of parametric resonance in this system, showing that the excitation mechanism can generate a chaotic response. Numerical experiments are carried out to demonstrate the existence of several types of regular and chaotic attractors. For the first time, the powers (the power of the motor, the power consumed by the damping force of the fluid free-surface oscillations, and the total power) are calculated, investigated, and shown for different regimes, both regular and chaotic, under parametric resonance interactions. [DOI: 10.1115/1.4005844]
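A hedged toy version of the mechanics (an ideally forced, damped Mathieu-type oscillator with a cubic stiffening term to saturate the parametric growth, rather than the paper's non-ideal limited-power motor model), tracking the power dissipated by the damping force:

```python
# Toy parametrically excited oscillator with the power dissipated by the
# damping force tracked along the trajectory. All parameter values are
# illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

delta, eps, zeta = 1.0, 0.6, 0.05   # stiffness, excitation depth, damping

def rhs(t, y):
    x, v = y
    # damped Mathieu equation plus cubic stiffening to bound the response
    return [v, -2 * zeta * v - (delta + eps * np.cos(2 * t)) * x - x**3]

sol = solve_ivp(rhs, (0, 200), [0.01, 0.0], max_step=0.05)
x, v = sol.y
p_damping = 2 * zeta * v**2          # instantaneous dissipated power
print(f"mean dissipated power: {p_damping.mean():.4f}")
print(f"steady amplitude:      {np.abs(x[len(x) // 2:]).max():.3f}")
```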

Relevance:

30.00%

Publisher:

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy, as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters form by accretion of matter and by merging between smaller units. During merger events, shocks driven by the gravity of the dark matter heat the diffuse baryonic component up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), which is of great importance as it calls for a "revision" of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster centre) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through microgauss turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM.

Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of the magnetic fields and into the acceleration of high-energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) are best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM. The physics involved in this scenario is very complex and the model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena, with a typical lifetime of ~1 Gyr or less, associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena, while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general), which, however, have not yet been addressed by present modelling.

The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we address the following main questions:
• Is it possible to model "self-consistently" the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between the thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?

Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points above. For this reason we devote some space in this thesis to reviewing the aspects of the physics of the ICM which are relevant to our goals. In Chapter 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapter 2 we review the main observational properties of non-thermal components in the ICM; and in Chapter 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapter 4, where we report the most important results from calculations carried out in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission.

The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapters 5, 6, 7 and 8. In Chapter 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM, and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters, and assume that during a merger a fraction of the PdV work done by the infalling subcluster in passing through the most massive one is injected in the form of magnetosonic waves. The stochastic acceleration of relativistic electrons by these waves, and the properties of the ensuing synchrotron (Radio Halo) and inverse Compton (IC, hard X-ray) emission of merging clusters, are then computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one.

In Chapter 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical microgauss strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. The most relevant finding is that the luminosity function of giant Radio Halos at 1.4 GHz is expected to peak around a radio power of ~10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapter 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ~100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequencies, and it makes it possible to design future radio surveys.

Based on the results of Chapter 6, in Chapter 7 we present work in progress on a "revision" of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ~ 0.05-0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray luminous galaxy clusters (at z ~ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ~ 0.05-0.4.

The most relevant limitation of the calculations presented in Chapters 5 and 6 is the assumption of an "average" size of Radio Halos, independent of their radio luminosity and of the mass of the parent cluster. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is well beyond the aim of this PhD thesis. On the other hand, in Chapter 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last "geometrical" MH-RH correlation allows us to overcome "observationally" the limitation of the "average" size of Radio Halos. Thus in this Chapter, by making use of this "geometrical" correlation and of a simplified form of the re-acceleration model based on the results of Chapters 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new and powerful tool of investigation, and we show that all the observed correlations (PR-RH, PR-MH, PR-T, PR-LX, ...) now become well understood in the context of the re-acceleration model. In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
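The extended Press & Schechter machinery mentioned above rests on the PS mass function. A minimal sketch of the standard PS form, with an assumed power-law sigma(M) and illustrative numbers rather than the thesis's actual cosmology:

```python
# Sketch of the Press & Schechter mass function behind the synthetic
# cluster populations of Chapter 5. The power-law sigma(M) and all
# numerical values are illustrative assumptions, not the thesis's inputs.
import numpy as np

DELTA_C = 1.686            # critical linear overdensity for collapse
RHO_BAR = 8.5e10           # assumed mean matter density [M_sun / Mpc^3]
SIGMA8, M8, ALPHA = 0.9, 2.4e14, 0.4   # sigma(M) = SIGMA8 * (M/M8)**-ALPHA

def ps_mass_function(M):
    """dn/dM [Mpc^-3 M_sun^-1] for a power-law sigma(M)."""
    sigma = SIGMA8 * (M / M8) ** (-ALPHA)
    nu = DELTA_C / sigma
    # |dln(sigma)/dln(M)| = ALPHA for the assumed power law
    return (np.sqrt(2.0 / np.pi) * RHO_BAR / M**2
            * nu * ALPHA * np.exp(-nu**2 / 2.0))

for M in (1e14, 5e14, 1e15, 2e15):
    print(f"M = {M:.0e} M_sun: dn/dM = {ps_mass_function(M):.3e}")
```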

Relevance:

30.00%

Publisher:

Abstract:

The present work attempts a comprehensive and comparative study of the different legal and regulatory problems involved in international securitization transactions. First, an introduction to securitization is provided, with the basic elements of the transaction, followed by its different varieties, including dynamic securitization and synthetic securitization structures. Together with this introduction to the intricacies of the structure, an insight into the influence of securitization on the financial and economic crisis of 2007-2009 is also provided, as well as an overview of the process of regulatory competition and cooperation that constitutes the framework for the international aspects of securitization. The next Chapter focuses on the aspects that constitute the foundations of structured finance: the inception of the vehicle, and the transfer of the risks associated with the securitized assets, with particular emphasis on the validity of those elements and on how a securitization transaction could be threatened at its root. In this sense, special importance is given to the validity of the trust as an instrument of finance, to the assignment of future receivables or receivables in block, and to the importance of formalities for the validity of corporations, trusts, assignments, etc., and to the interaction of such formalities, contained in general corporate, trust and assignment law, with those contemplated under specific securitization regulations.

The next Chapter (III) then focuses on creditor protection aspects. As such, we provide some insights into the debate on the capital structure of the firm and its inadequacy for assessing the financial soundness problems inherent in securitization. We then proceed to analyze the importance of rules on creditor protection in the context of securitization. The corollary lies in the rules applicable in case of insolvency. In this sense, we distinguish the cases where a party involved in the transaction goes bankrupt from those where the transaction itself collapses. Finally, we focus on the scenario where a substance-over-form analysis may compromise some of the elements of the structure (notably the limited liability of the sponsor, and/or the transfer of assets) by means of veil-piercing, substantive consolidation, or recharacterization theories.

Once these elements have been covered, the following Chapters focus on the regulatory aspects involved in the transaction. Chapter IV deals with "market" regulations, i.e. those concerned with information disclosure and other rules (appointment of the indenture trustee, and elaboration of a rating by a rating agency) concerning the offering of asset-backed securities to the public. Chapter V, on the other hand, focuses on the "prudential" regulation of the entity entrusted with securitizing assets (the so-called Special Purpose Vehicle) and of other entities involved in the process. Regarding the SPV, reference is made to licensing requirements, restriction of activities and governance structures to prevent abuses. Regarding the sponsor of the transaction, the focus is on provisions on sound originating practices and on the servicing function. Finally, we study accounting and banking regulations, including the Basel I and Basel II Frameworks, which determine the consolidation of the SPV and the de-recognition of the securitized assets from the originating company's balance sheet, as well as the subsequent treatment of those assets, in particular by banks.

Chapters VI-IX are concerned with liability matters. Chapter VI is an introduction to the different sources of liability. Chapter VII focuses on the liability of the SPV and its management for the information supplied to investors, the management of the asset pool, and the breach of loyalty (or fiduciary) duties. Chapter VIII refers rather to the liability of the originator as a result of such information and statements, but also as a result of inadequate and reckless originating or servicing practices. Chapter IX finally focuses on the third parties entrusted with the soundness of the transaction towards the market, the so-called gatekeepers. In this respect, we place special emphasis on the liability of indenture trustees, underwriters and rating agencies.

Chapters X and XI focus on the international aspects of securitization. Chapter X contains a conflict-of-laws analysis of the different aspects of structured finance. In this respect, a study is made of the laws applicable to the vehicle, to the transfer of risks (either by assignment or by means of derivatives contracts), and to liability issues; a study is also made of the competent jurisdiction (and applicable law) in bankruptcy cases, as well as in cases where a substance-over-form analysis is performed. Special attention is also devoted to the role of financial and securities regulations, as well as to their territorial limits and the extraterritoriality problems involved. Chapter XI supplements the prior Chapter, for it analyzes the limits to the States' exercise of regulatory power set by the personal and "market" freedoms included in the US Constitution or the EU Treaties. Reference is also made to the (still insufficient) rules of the WTO Framework, and to their significance for the States' recognition and regulation of securitization transactions.

Relevance:

30.00%

Publisher:

Abstract:

Recently, an ever-increasing degree of automation has been observed in most industrial automation processes. This increase is motivated by the higher requirements for systems with great performance in terms of the quality of the products and services generated, productivity, efficiency, and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy products packed in boxes, such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated that there are around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and of the electrical and electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through the verification of the correctness of the processing; directing the operator attending the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing in real time information on diagnostics, as a support for the maintenance operations of the machine. The kinds of facilities that designers can directly find on the market, in terms of software component libraries, provide an adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.

What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very "unstructured". No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to clarify the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been receiving this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. During the last years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.

This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and these devices are more vulnerable by their own nature. The problem of diagnosis and fault isolation in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to the formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey of the state of the software engineering paradigm applied to industrial automation is presented. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. In Appendix A some concepts and results about Discrete Event Systems are briefly reported, which should help the reader in understanding some crucial points of Chapter 5; in Appendix B an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna is reported, used to validate the approaches presented in Chapters 3, 4 and 5. In Appendix C some component models used in Chapter 5 for formal verification are reported.
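To make the modular, encapsulated style concrete, here is a deliberately tiny, hypothetical sketch of a state-machine-based logic-control component in the loose spirit of the Generalized Actuator idea; the class name, state set and events are illustrative inventions, not the architecture defined in Chapters 3 and 4:

```python
# Tiny, hypothetical state-machine component illustrating the modular,
# encapsulated style discussed above. Names, states and events are
# invented for illustration; this is not the thesis's architecture.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    RUNNING = auto()
    FAULT = auto()

# (state, event) -> next state; any other pair leaves the state unchanged
TRANSITIONS = {
    (State.IDLE, "start"): State.RUNNING,
    (State.RUNNING, "stop"): State.IDLE,
    (State.RUNNING, "fault_detected"): State.FAULT,  # diagnosis hook
    (State.FAULT, "reset"): State.IDLE,              # recovery hook
}

class ActuatorComponent:
    """Encapsulated logic-control unit: events in, state transitions out."""
    def __init__(self) -> None:
        self.state = State.IDLE

    def handle(self, event: str) -> State:
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

component = ActuatorComponent()
for event in ("start", "fault_detected", "reset"):
    print(event, "->", component.handle(event).name)
```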

Relevance:

30.00%

Publisher:

Abstract:

The main part of this thesis describes a method of calculating the massless two-loop two-point function which allows expanding the integral up to an arbitrary order in the dimensional regularization parameter epsilon, by rewriting it as a double Mellin-Barnes integral. Closing the contour and collecting the residues then transforms this integral into a form that enables us to utilize S. Weinzierl's computer library nestedsums. We showed that multiple zeta values and rational numbers are sufficient for expanding the massless two-loop two-point function to all orders in epsilon. We then use the Hopf algebra of Feynman diagrams and its antipode to investigate the appearance of Riemann's zeta function in the counterterms of Feynman diagrams in massless Yukawa theory and massless QED. The class of Feynman diagrams we consider consists of graphs built from primitive one-loop diagrams and the non-planar vertex correction, where the vertex corrections depend on only one external momentum. We showed the absence of powers of pi in the counterterms of the non-planar vertex correction and of diagrams built by shuffling it with the one-loop vertex correction. We also found the invariance of some coefficients of zeta functions under a change of momentum flow through these vertex corrections.
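For reference, the multiple zeta values mentioned above are the standard nested sums (this definition is textbook material, not specific to the thesis):

```latex
% Multiple zeta values: together with the rationals, the number class
% that suffices for the all-order epsilon expansion described above.
\[
  \zeta(s_1, s_2, \ldots, s_k)
  = \sum_{n_1 > n_2 > \cdots > n_k \ge 1}
    \frac{1}{n_1^{s_1} n_2^{s_2} \cdots n_k^{s_k}},
  \qquad s_i \in \mathbb{N}, \; s_1 \ge 2 .
\]
```

An expansion to all orders in epsilon then takes the schematic form of a Laurent series whose coefficients are rational linear combinations of such values.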

Relevance:

30.00%

Publisher:

Abstract:

This thesis considers quantum hydrodynamic (QHD) models, which are used in particular in the modelling of semiconductor devices. The QHD model consists of the conservation equations for the particle density, the momentum and the energy density, including the quantum corrections given by the Bohm potential. First, an overview is given of the known results for QHD models neglecting collision effects, which can be derived from a mixed-state Schrödinger system or from the Wigner equation. After reformulating the one-dimensional QHD equations with linear potential as a stationary Schrödinger equation, semi-analytical versions of the QHD equations for the current-voltage curve are considered. Furthermore, viscous stabilizations of the QHD model are taken into account, and the numerical viscosity proposed by Gardner for the upwind finite-difference scheme is computed. Next, the viscous QHD model is derived from the Wigner equation with a Fokker-Planck collision operator. This model contains the physical viscosity introduced by the collision operator. The existence of solutions (with strictly positive particle density) is shown for the isothermal, stationary, one-dimensional viscous model with general data and non-homogeneous boundary conditions. The estimates required for the proof depend on the viscosity and therefore do not allow passing to the inviscid limit. Numerical simulations of the resonant tunnelling diode, modelled with the non-isothermal, stationary, one-dimensional viscous QHD model, show the influence of the viscosity on the solution. Using the quantum entropy minimization method developed by Degond and Ringhofer, the general QHD equations are derived from the Wigner-Boltzmann equation with the BGK collision operator. The derivation is based on a careful expansion of the quantum Maxwellian in powers of the scaled Planck constant. The resulting model also contains vortex terms and dispersive velocity terms. As a consequence, the current-voltage curve for the resonant tunnelling diode is numerically preserved when the general QHD model is used in one dimension. The results show that the dispersive velocity term stabilizes the solution of the system.
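For reference, the Bohm potential supplying the quantum correction has the standard form below (conventions for sign and scaling vary between authors; this is the textbook expression, not copied from the thesis):

```latex
% Bohm quantum potential appearing in the QHD conservation equations,
% in its standard textbook form:
\[
  Q(n) \;=\; -\frac{\hbar^{2}}{2m}\,\frac{\Delta \sqrt{n}}{\sqrt{n}},
\]
% where n is the particle density, m the (effective) mass and \hbar the
% (scaled) Planck constant.
```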

Relevance:

30.00%

Publisher:

Abstract:

The thesis deals with the concept of presumptions, and in particular of legal presumptions, in the context of national tax systems (Italy and Belgium) and EU law. The purpose was to investigate the concept of legal presumption from a twofold comparative perspective. After providing a general overview of the common core concept of presumption in the European context, an insight into the national approach to legal presumptions was given by examining two different national experiences, namely the Italian and Belgian tax systems. At this stage, the constitutional framework and some of the most interesting presumptive measures relevant at EU level were explored, with a view to underlining possible divergences and common grounds. The concept of (national) legal presumption was then investigated in the context of EU law, in an attempt to systematize under a uniform perspective a matter which has traditionally been dealt with either from a merely national point of view or, at EU level, in a fragmented form. In this instance, the relevant EU law framework and the most significant EUCJ case-law, in particular in the fields of customs duties, VAT, the repayment of taxes levied in breach of EU law, and direct taxation, were examined so as to construe the overall EU approach to national legal presumptions. This was done with the aim of determining whether, and to what extent, a common analytical framework may be identified, from which certain criteria governing the compatibility of national legal presumptions with EU law can be extracted.