983 results for unified framework
Abstract:
Since H. G. Johnson's work on tariff retaliation (Review of Economic Studies, 1953–54), the questions of whether and how a country can win a "tariff war", and more broadly what determines a country's strategic position in setting bilateral tariffs, have been tackled in a variety of settings. Although it is widely accepted that a country has a strategic advantage in a tariff war if its relative monopoly power is sufficiently large, it is unclear what forces lie behind the formation of such power. The goal of this research is to provide a unified framework in which various forces, such as relative country size, absolute advantage and relative advantage, are considered simultaneously. In a two-country continuum-of-commodity neoclassical trade model, it is shown that sufficiently large relative country size is a sufficient condition for a country to choose the non-cooperative tariff Nash equilibrium over free trade. It is also shown that technology disparities, including absolute advantage, the rate of technology disparity and its distribution, all contribute to a country's strategic position and interact with country size.

The leverage effect is usually invoked to explain asymmetric volatility in equity returns, but leverage alone accounts for only part of the asymmetry. In this research, it is shown that stock return volatility is related to firms' financial status: financially constrained firms tend to be more sensitive to return changes. A financial-constraint factor explains why some firms tend to be more volatile than others, and I find that it explains stock return volatility independently of other factors such as firm size, industry affiliation and leverage. Industry affiliation is shown to be a weak differentiator of volatility, whereas firm size distinguishes both the level of volatility and volatility-return sensitivity well. The leverage hypothesis is partly corroborated, and situations where the leverage effect does not apply are discussed. Finally, I examine the effects of macroeconomic policy on overall market volatility.
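As a rough illustration of the second study's empirical setup, the sketch below regresses firm-level return volatility on a financial-constraint index together with size and leverage controls. All variable names, coefficients and data are simulated placeholders, not the thesis's actual factor construction or sample.

```python
# Hypothetical illustration: regress firm-level return volatility on a
# financial-constraint index plus size and leverage controls (all names
# and data are made up; the thesis's actual factor construction differs).
import numpy as np

rng = np.random.default_rng(0)
n_firms = 500

# Simulated firm characteristics
constraint = rng.normal(size=n_firms)          # financial-constraint index
log_size = rng.normal(10, 1, size=n_firms)     # log market capitalization
leverage = rng.uniform(0, 0.8, size=n_firms)   # debt / assets

# Toy model: volatility rises with constraint and leverage, falls with size
volatility = (0.30 + 0.05 * constraint - 0.02 * log_size + 0.10 * leverage
              + rng.normal(scale=0.02, size=n_firms))

# OLS via least squares: does the constraint factor explain volatility
# after controlling for size and leverage?
X = np.column_stack([np.ones(n_firms), constraint, log_size, leverage])
beta, *_ = np.linalg.lstsq(X, volatility, rcond=None)
print(dict(zip(["const", "constraint", "log_size", "leverage"], beta.round(3))))
```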
Abstract:
This proposal is a non-quantitative study, based on a corpus of real data, which offers a principled account of the translation strategies employed in translating English film titles into Spanish in terms of cognitive modeling. More specifically, we draw on Ruiz de Mendoza and Galera's (2014) work on what they term content (or low-level) cognitive operations, based on either 'stands for' or 'identity' relations, in order to investigate possible motivating factors for translations that follow oblique procedures, i.e. non-literal renderings of source titles. The present proposal is made in consonance with recent findings within the framework of Cognitive Linguistics (Samaniego 2007), which show that this linguistic approach can fruitfully address relevant issues in Translation Studies, the most important for our purposes being the exploration of the cognitive operations that account for the use of translation strategies (Rojo and Ibarretxe-Antuñano 2013: 10), mainly expansion and reduction operations, parameterization, echoing, mitigation and comparison by contrast. This fits in well with a descriptive approach to translation and particularly with skopos theory, whose main aim is to achieve functionally adequate renderings of source texts.
Abstract:
The Theoretical and Experimental Tomography in the Sea Experiment (THETIS 1) took place in the Gulf of Lion to observe the evolution of the temperature field and the process of deep convection during the 1991-1992 winter. The temperature measurements consist of moored sensors, conductivity-temperature-depth and expendable bathythermograph surveys, and acoustic tomography. Because of this diverse data set, and since the field evolves rather fast, the analysis uses a unified framework based on estimation theory and implementing a Kalman filter. The resolution and the errors associated with the model are systematically estimated. Temperature is a good tracer of water masses. The time-evolving three-dimensional view of the field resulting from the analysis shows the details of the three classical convection phases: preconditioning, vigorous convection, and relaxation. In all phases, there is strong spatial nonuniformity, with mesoscale activity, short timescales, and sporadic evidence of advective events (surface capping, intrusions of Levantine Intermediate Water (LIW)). Deep convection, reaching 1500 m, was observed in late February; by late April the field had not yet returned to its initial conditions (strong deficit of LIW). Comparison with available atmospheric flux data shows that advection acts to delay the occurrence of convection and confirms the essential role of buoyancy fluxes. For this winter, the deep mixing results in an injection of anomalously warm water (ΔT ≃ 0.03°C) to a depth of 1500 m, compatible with the deep warming previously reported.
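As a minimal sketch of the estimation-theory machinery mentioned above, the following scalar Kalman filter tracks a single slowly warming "temperature" state from noisy observations. All numbers are illustrative; the real analysis assimilates a full three-dimensional field, not this toy one-state model.

```python
# Minimal scalar Kalman-filter sketch in the spirit of the estimation-theory
# analysis described above: one "temperature" state updated from noisy
# observations. All numbers are illustrative, not from the THETIS data.
import numpy as np

rng = np.random.default_rng(1)
n_steps = 50
true_temp = 13.0 + 0.002 * np.arange(n_steps)            # slowly warming water
obs = true_temp + rng.normal(scale=0.05, size=n_steps)   # noisy sensors

x, P = 12.5, 1.0       # initial state estimate and its variance
Q, R = 1e-4, 0.05**2   # process and observation noise variances

estimates = []
for z in obs:
    # Predict (random-walk model), then update with the new observation
    P = P + Q
    K = P / (P + R)            # Kalman gain
    x = x + K * (z - x)
    P = (1 - K) * P
    estimates.append(x)

print(f"final estimate {estimates[-1]:.3f} vs truth {true_temp[-1]:.3f}")
```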
Abstract:
The overarching theme of this thesis is mesoscale optical and optoelectronic design of photovoltaic and photoelectrochemical devices. In a photovoltaic device, light absorption and charge carrier transport are coupled together on the mesoscale, and in a photoelectrochemical device, light absorption, charge carrier transport, catalysis, and solution species transport are all coupled together on the mesoscale. The work discussed herein demonstrates that simulation-based mesoscale optical and optoelectronic modeling can lead to detailed understanding of the operation and performance of these complex mesostructured devices, serve as a powerful tool for device optimization, and efficiently guide device design and experimental fabrication efforts. In-depth studies of two mesoscale wire-based device designs illustrate these principles—(i) an optoelectronic study of a tandem Si|WO3 microwire photoelectrochemical device, and (ii) an optical study of III-V nanowire arrays.
The study of the monolithic, tandem Si|WO3 microwire photoelectrochemical device begins with the development of an optoelectronic model and its validation against experiment. This study capitalizes on the synergy between experiment and simulation to demonstrate the model's predictive power for extractable device voltage and light-limited current density. The developed model is then used to understand the limiting factors of the device and to optimize its optoelectronic performance. The results of this work reveal that high-fidelity modeling can facilitate unequivocal identification of limiting phenomena, such as parasitic absorption via excitation of a surface plasmon-polariton mode, and rapid design optimization, achieving over a 300% enhancement in optoelectronic performance relative to a nominal design for this device architecture, which would be time-consuming and challenging to accomplish via experiment.
The work on III-V nanowire arrays also starts as a collaboration of experiment and simulation aimed at gaining understanding of unprecedented, experimentally observed absorption enhancements in sparse arrays of vertically-oriented GaAs nanowires. To explain this resonant absorption in periodic arrays of high index semiconductor nanowires, a unified framework that combines a leaky waveguide theory perspective and that of photonic crystals supporting Bloch modes is developed in the context of silicon, using both analytic theory and electromagnetic simulations. This detailed theoretical understanding is then applied to a simulation-based optimization of light absorption in sparse arrays of GaAs nanowires. Near-unity absorption in sparse, 5% fill fraction arrays is demonstrated via tapering of nanowires and multiple wire radii in a single array. Finally, experimental efforts are presented towards fabrication of the optimized array geometries. A hybrid self-catalyzed and selective area MOCVD growth method is used to establish morphology control of GaP nanowire arrays. Similarly, morphology and pattern control of nanowires is demonstrated with ICP-RIE of InP. Optical characterization of the InP nanowire arrays gives proof of principle that tapering and multiple wire radii can lead to near-unity absorption in sparse arrays of InP nanowires.
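As a quick back-of-the-envelope check on what a "5% fill fraction" array means geometrically, the snippet below computes the fill fraction of a square array of vertical cylindrical wires; the radius and pitch shown are illustrative values, not the fabricated geometry.

```python
# Sanity check of the "sparse, 5% fill fraction" figure: for a square array
# of vertical cylindrical wires, fill fraction = pi * r^2 / pitch^2.
# Radius and pitch below are illustrative, not the actual array geometry.
import math

radius_nm = 70.0
pitch_nm = 555.0
fill_fraction = math.pi * radius_nm**2 / pitch_nm**2
print(f"fill fraction ≈ {fill_fraction:.3f}")   # ≈ 0.05
```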
Abstract:
Process systems design, operation and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. With a scenario-based formulation, these problems lead to large-scale MILPs/MINLPs that are well structured. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), in which Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. The method alternates between DWD iterations and BD iterations, where DWD restricted master problems and BD primal problems yield a sequence of upper bounds, and BD relaxed master problems yield a sequence of lower bounds. A variant of CD, called multicolumn-multicut CD, which adds multiple columns per iteration of the DWD restricted master problem and multiple cuts per iteration of the BD relaxed master problem, is then developed to improve solution time. Finally, an extended cross decomposition method (ECD) for solving two-stage stochastic programs with risk constraints is proposed. In this approach, CD is used at the first level and DWD at a second level to solve the original problem to optimality. ECD has a computational advantage over a bilevel decomposition strategy or solving the monolithic problem with an MILP solver. The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality, without the need for explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are systematically solved within a single framework. The relaxed master problem, obtained from a reformulation of the original problem, is solved only when necessary. A convexification of the relaxed master problem and a domain-reduction procedure are integrated into the decomposition framework to improve solution efficiency. Case studies taken from renewable-resource and fossil-fuel based applications in process systems engineering show that these novel decomposition approaches have significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.
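For readers unfamiliar with the bounding pattern these decomposition schemes share, the toy below runs a classical Benders-style loop (not the thesis's cross or joint decomposition) on a one-variable two-stage problem: subproblems supply optimality cuts and upper bounds, while the cut-augmented master supplies lower bounds. All data are made up, and scipy's LP solver stands in for a real MILP solver.

```python
# Toy Benders-style bounding loop: choose first-stage capacity x, then pay a
# per-unit shortfall penalty in each demand scenario. Subproblem values and
# subgradients generate cuts; the master LP gives lower bounds.
from scipy.optimize import linprog

demands = [2.0, 6.0, 9.0]
probs = [0.3, 0.4, 0.3]
c_x, penalty = 1.0, 5.0

def recourse(x, d):
    """Second-stage cost and a subgradient with respect to x."""
    short = max(d - x, 0.0)
    grad = -penalty if x < d else 0.0
    return penalty * short, grad

cuts = []          # (scenario index, intercept a, slope g): theta_s >= a + g*x
x_k = 0.0
for it in range(20):
    ub = c_x * x_k + sum(p * recourse(x_k, d)[0] for p, d in zip(probs, demands))
    for s, d in enumerate(demands):
        q, g = recourse(x_k, d)
        cuts.append((s, q - g * x_k, g))
    # Master LP over variables [x, theta_0, theta_1, theta_2]
    c = [c_x] + probs
    A_ub, b_ub = [], []
    for s, a, g in cuts:
        row = [g, 0.0, 0.0, 0.0]
        row[1 + s] = -1.0
        A_ub.append(row)          # g*x - theta_s <= -a
        b_ub.append(-a)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 10)] + [(0, None)] * 3, method="highs")
    lb, x_k = res.fun, res.x[0]
    if ub - lb < 1e-6:
        break
print(f"optimal first-stage decision ≈ {x_k:.2f}, cost ≈ {lb:.2f}")
```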
Abstract:
In this thesis, we deal with the design of experiments in the drug development process, focusing on the design of clinical trials for treatment comparisons (Part I) and the design of preclinical laboratory experiments for protein development and manufacturing (Part II). In Part I we propose a multi-purpose design methodology for sequential clinical trials. We derive optimal allocations of patients to treatments for testing the efficacy of several experimental groups while also taking ethical considerations into account. We first consider exponential responses for survival trials and then present a unified framework for heteroscedastic experimental groups that encompasses the general ANOVA set-up. The very good performance of the suggested optimal allocations, in terms of both inferential and ethical characteristics, is illustrated analytically and through several numerical examples, including comparisons with other designs proposed in the literature. Part II concerns the planning of experiments for processes composed of multiple steps in the context of preclinical drug development and manufacturing. Following the Quality by Design paradigm, the objective of the multi-step design strategy is to define the manufacturing design space of the whole process; because the interactions among subsequent steps are taken into account, our proposal ensures the quality and safety of the final product while enabling more flexibility and process robustness in manufacturing.
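As a point of reference for the heteroscedastic setting, the snippet below computes a classical Neyman allocation, with group sizes proportional to the assumed standard deviations; the thesis's optimal allocations additionally weight ethical criteria, which this toy calculation ignores.

```python
# Classical Neyman allocation sketch (n_i proportional to sigma_i), used here
# only as a familiar benchmark; group standard deviations and labels are
# invented, and ethical weighting from the thesis is not modeled.
import numpy as np

sigmas = np.array([1.0, 1.8, 2.5])   # assumed group standard deviations
n_total = 300

proportions = sigmas / sigmas.sum()
allocation = np.round(proportions * n_total).astype(int)
print(dict(zip(["control", "treatment A", "treatment B"], allocation)))
```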
Abstract:
Modeling the evolution of the state of program memory during program execution is critical to many parallelization techniques. Current memory analysis techniques either provide very accurate information but run prohibitively slowly, or produce very conservative results. An approach based on abstract interpretation is presented for analyzing programs at compile time, which can accurately determine many important program properties such as aliasing, logical data structures and shape. These properties are known to be critical for transforming a single-threaded program into a version that can be run on multiple execution units in parallel. The analysis is shown to be of polynomial complexity in the size of the memory heap. Experimental results for benchmarks in the Jolden suite are given. These results show that in practice the analysis method is efficient and is capable of accurately determining shape information in programs that create and manipulate complex data structures.
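A toy version of the shape property such an analysis recovers is sketched below: given an abstract heap as a set of points-to edges, it classifies the structure as a tree, a DAG (shared nodes) or cyclic. This only illustrates the property itself and is far simpler than the paper's abstract-interpretation-based analysis.

```python
# Toy shape classification on an abstract heap graph: nodes are abstract
# memory locations, edges are points-to relations, and the result is
# "tree", "DAG", or "cyclic". Not the paper's algorithm, just the idea.
from collections import defaultdict

def classify_shape(edges):
    children = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set()
    for src, dst in edges:
        children[src].append(dst)
        indegree[dst] += 1
        nodes.update((src, dst))

    # Cycle detection by depth-first search with three colors
    WHITE, GREY, BLACK = 0, 1, 2
    color = dict.fromkeys(nodes, WHITE)
    def has_cycle(n):
        color[n] = GREY
        for m in children[n]:
            if color[m] == GREY or (color[m] == WHITE and has_cycle(m)):
                return True
        color[n] = BLACK
        return False
    if any(color[n] == WHITE and has_cycle(n) for n in nodes):
        return "cyclic"
    # Sharing (a node with two or more parents) distinguishes a DAG from a tree
    return "DAG" if any(indegree[n] > 1 for n in nodes) else "tree"

print(classify_shape([("a", "b"), ("a", "c")]))              # tree
print(classify_shape([("a", "b"), ("a", "c"), ("b", "c")]))  # DAG
print(classify_shape([("a", "b"), ("b", "a")]))              # cyclic
```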
Abstract:
We present an approach for assessing the significance of sequence and structure comparisons by using nearly identical statistical formalisms for both sequence and structure. Doing so involves an all-vs.-all comparison of protein domains [taken here from the Structural Classification of Proteins (scop) database] and then fitting a simple distribution function to the observed scores. By using this distribution, we can attach a statistical significance to each comparison score in the form of a P value, the probability that a better score would occur by chance. As expected, we find that the scores for sequence matching follow an extreme-value distribution. The agreement, moreover, between the P values that we derive from this distribution and those reported by standard programs (e.g., blast and fasta) validates our approach. Structure comparison scores also follow an extreme-value distribution when the statistics are expressed in terms of a structural alignment score (essentially the sum of reciprocated distances between aligned atoms minus gap penalties). We find that the traditional metric of structural similarity, the rms deviation in atom positions after fitting aligned atoms, follows a different distribution of scores and does not perform as well as the structural alignment score. Comparison of the sequence and structure statistics for pairs of proteins known to be distantly related shows that structural comparison is able to detect approximately twice as many distant relationships as sequence comparison at the same error rate. The comparison also indicates that there are very few pairs with significant similarity in terms of sequence but not structure, whereas many pairs have significant similarity in terms of structure but not sequence.
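The statistical idea can be sketched in a few lines: fit an extreme-value (Gumbel) distribution to a background of comparison scores and report, for a new score, the probability that an equal or better score arises by chance. The scores below are simulated rather than real scop comparisons.

```python
# Sketch of extreme-value statistics for comparison scores: fit a Gumbel
# distribution to a simulated background and convert an observed score into
# a P value (probability of an equal or better score by chance).
import numpy as np
from scipy.stats import gumbel_r

background_scores = gumbel_r.rvs(loc=30, scale=5, size=10_000, random_state=2)

loc, scale = gumbel_r.fit(background_scores)
observed = 55.0
p_value = gumbel_r.sf(observed, loc=loc, scale=scale)  # P(score >= observed)
print(f"P value for score {observed}: {p_value:.2e}")
```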
Abstract:
Practitioners assess the performance of entities in increasingly large and complicated datasets. Even if non-parametric models such as Data Envelopment Analysis were ever considered simple push-button technologies, that view is untenable when many variables are available or when data have to be compiled from several sources. This paper introduces the 'COOPER-framework', a comprehensive model for carrying out non-parametric projects. The framework consists of six interrelated phases: Concepts and objectives, On structuring data, Operational models, Performance comparison model, Evaluation, and Result and deployment. Each phase describes necessary steps a researcher should examine for a well-defined and repeatable analysis. The COOPER-framework provides the novice analyst with guidance, structure and advice for a sound non-parametric analysis, while the more experienced analyst benefits from a checklist ensuring that important issues are not forgotten. In addition, the use of a standardized framework makes non-parametric assessments more reliable, more repeatable, more manageable, faster and less costly.
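As an example of the kind of non-parametric model the framework organizes, the sketch below computes input-oriented CCR Data Envelopment Analysis efficiency scores for a handful of hypothetical decision-making units by solving the multiplier-form linear program; all data are invented.

```python
# Input-oriented CCR DEA sketch (multiplier form): for each DMU k, maximize
# u.y_k subject to v.x_k = 1 and u.y_j - v.x_j <= 0 for every DMU j.
# Toy data; rows are DMUs, with one input column and two output columns.
import numpy as np
from scipy.optimize import linprog

inputs = np.array([[20.0], [30.0], [40.0], [25.0]])
outputs = np.array([[10.0, 5.0], [12.0, 9.0], [20.0, 10.0], [8.0, 7.0]])

def ccr_efficiency(k):
    n_out, n_in = outputs.shape[1], inputs.shape[1]
    c = np.concatenate([-outputs[k], np.zeros(n_in)])       # maximize u.y_k
    A_eq = [np.concatenate([np.zeros(n_out), inputs[k]])]   # v.x_k = 1
    A_ub = np.hstack([outputs, -inputs])                    # u.y_j - v.x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(inputs)),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None), method="highs")
    return -res.fun

for k in range(len(inputs)):
    print(f"DMU {k}: efficiency {ccr_efficiency(k):.3f}")
```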
A framework for transforming, analyzing, and realizing software designs in Unified Modeling Language
Abstract:
Unified Modeling Language (UML) is the most comprehensive and widely accepted object-oriented modeling language, thanks to its multi-paradigm modeling capabilities and easy-to-use graphical notations, with strong international organizational support and industrial, production-quality tool support. However, there is no precise definition of the semantics of individual UML notations or of the relationships among multiple UML models, which often introduces incompleteness and inconsistency problems into UML software designs, especially for complex systems. Furthermore, there is a lack of methodologies to ensure a correct implementation of a given UML design. The purpose of this investigation is to verify and validate software designs in UML and to provide dependability assurance for the realization of a UML design.

In my research, an approach is proposed to transform UML diagrams into a semantic domain, namely a formal component-based framework. The proposed framework consists of components and interactions through message passing, modeled by two-layer algebraic high-level nets and transformation rules, respectively. In the transformation approach, class diagrams, state machine diagrams and activity diagrams are transformed into component models, and transformation rules are extracted from interaction diagrams. By applying transformation rules to component models, a (sub)system model of one or more scenarios can be constructed. Various techniques, such as model checking and Petri net analysis, can then be adopted to check whether UML designs are complete and consistent. A new component, called the property parser, was developed and merged into the SAM Parser tool, which realizes (sub)system models automatically. The property parser generates and weaves runtime monitoring code into system implementations automatically for dependability assurance. The framework is flexible in that it can be used not only to verify and validate UML designs but also to build models for various scenarios. As a result of my research, several kinds of previously ignored behavioral inconsistencies can be detected.
Abstract:
Developing a unified classification system to replace four of the systems currently used in disability athletics (i.e., track and field) has been widely advocated. The diverse impairments to be included in a unified system require several assessment methods, the results of which cannot be meaningfully compared. Therefore, the taxonomic basis of current classification systems is invalid in a unified system. Biomechanical analysis establishes that force, a vector described in terms of magnitude and direction, is a key determinant of success in all athletic disciplines. It is posited that all impairments to be included in a unified system may be classified as either force magnitude impairments (FMI) or force control impairments (FCI). This framework would provide a valid taxonomic basis for a unified system, creating the opportunity to decrease the number of classes and enhance the viability of disability athletics.
Abstract:
Development of a unified classification system to replace four of the systems currently used in disability athletics (i.e., track and field) has been widely advocated. The definition and purpose of classification, underpinned by taxonomic principles and collectively endorsed by relevant disability sport organizations, have not yet been developed but are required for successful implementation of a unified system. It is posited that the International Classification of Functioning, Disability and Health (ICF), published by the World Health Organization (2001), and current disability athletics systems are, fundamentally, classifications of the functioning and disability associated with health conditions and are highly interrelated. A rationale for basing a unified disability athletics system on the ICF is established. Following taxonomic analysis of the current systems, the definition and purpose of a unified disability athletics classification are proposed and discussed. The proposed taxonomic framework and definitions have implications for other disability sport classification systems.
Abstract:
Theory building is one of the most crucial challenges faced by basic, clinical and population research, which form the scientific foundations of health practices in contemporary societies. The objective of this study is to propose a Unified Theory of Health-Disease as a conceptual tool for modeling health-disease-care in the light of complexity approaches. To this end, the epistemological basis of theoretical work in the health field and concepts of complexity theory as applied to health problems are first discussed. Secondly, the concepts of model-object, multi-planes of occurrence, modes of health and the disease-illness-sickness complex are introduced and integrated into a unified theoretical framework. Finally, in the light of recent epistemological developments, the concept of Health-Disease-Care Integrals is updated as a complex reference object suitable for modeling health-related processes and phenomena.
RadiaLE: A framework for designing and assessing link quality estimators in wireless sensor networks
Abstract:
Stringent cost and energy constraints impose the use of low-cost and low-power radio transceivers in large-scale wireless sensor networks (WSNs). This fact, together with the harsh characteristics of the physical environment, requires a rigorous WSN design. Mechanisms for WSN deployment and topology control, MAC and routing, and resource and mobility management greatly depend on reliable link quality estimators (LQEs). This paper describes the RadiaLE framework, which enables the experimental assessment, design and optimization of LQEs. RadiaLE comprises (i) the hardware components of the WSN testbed and (ii) a software tool for setting up and controlling the experiments, automating the gathering of link measurements through packet-statistics collection, and analyzing the collected data, thus allowing LQE evaluation. We also propose a methodology that allows (i) properly setting up different types of links and different types of traffic, (ii) collecting rich link measurements, and (iii) validating LQEs using a holistic and unified approach. To demonstrate the validity and usefulness of RadiaLE, we present two case studies: the characterization of low-power links and a comparison between six representative LQEs. We also extend the second study to evaluate the accuracy of the TOSSIM 2 channel model.
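To illustrate the kind of estimator RadiaLE is built to evaluate, the snippet below computes two textbook link-quality estimates over a simulated packet trace: a windowed packet reception ratio (PRR) and an exponentially weighted moving average of it. The window size and smoothing factor are arbitrary choices, not RadiaLE defaults.

```python
# Two textbook link-quality estimators over a simulated packet trace:
# windowed PRR and a WMEWMA-style exponentially weighted moving average.
# Parameters and the reception probability are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(3)
received = rng.random(200) < 0.75     # True where the packet was received

window = 20
alpha = 0.6                            # EWMA smoothing factor
ewma, prr_series, ewma_series = None, [], []
for start in range(0, len(received), window):
    prr = received[start:start + window].mean()
    ewma = prr if ewma is None else alpha * ewma + (1 - alpha) * prr
    prr_series.append(prr)
    ewma_series.append(ewma)

print("windowed PRR:", np.round(prr_series, 2))
print("smoothed LQE:", np.round(ewma_series, 2))
```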
Abstract:
This article presents a systematic framework, named Holopathogenesis, for modeling several classes of illness-sickness-disease. Holopathogenesis is defined as the processes of over-determination of diseases and related conditions taken as a whole, comprising selected facets of the complex object Health. First, a conceptual background of Holopathogenesis is presented as a series of significant interfaces (biomolecular-immunological, physiopathological-clinical, epidemiological-ecosocial). Second, propositions derived from Holopathogenesis are introduced in order to allow drawing the disease-illness-sickness complex as a hierarchical network of networks. Third, a formalization of intra- and inter-level correspondences, over-determination processes, effects and links of Holopathogenesis models is proposed. Finally, the Holopathogenesis framework is evaluated as a comprehensive theoretical pathology, taken as a preliminary step towards a unified theory of health-disease.