960 results for Misspecification, Sign restrictions, Shock identification, Model validation.
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This motivates the adoption of static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code; such tools can help meet all of these goals, could significantly improve software quality, and remain a challenging field. This dissertation contributes an architecture-oriented code validation, error localization, and optimization technique that assists the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, thus improving both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs.
An incorrect sequence of machine code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on an optimum data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler and assembler, applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine code patterns, which drastically reduces state space creation and contributes to improved model checking. Though the technique described is general, the implementation is architecture-oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features in developing embedded systems.
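The abstract gives no code; as a rough illustration of the redundancy-detection idea, a minimal sketch (using a hypothetical instruction encoding, not the dissertation's actual PIC16F87X rule set) could track the active bank state along one execution path and flag bank-select instructions that re-select the already active bank:

```python
# Minimal sketch: detect redundant bank-select instructions along one
# execution path by tracking the active memory bank state. The instruction
# tuples below are a hypothetical encoding, not the dissertation's rules.

def find_redundant_bank_switches(path):
    """path: list of ('SELECT_BANK', n) or ('OP', ...) tuples."""
    active_bank = None                      # unknown at path entry
    redundant = []
    for i, instr in enumerate(path):
        if instr[0] == 'SELECT_BANK':
            if instr[1] == active_bank:     # bank already active:
                redundant.append(i)         # the state transition is a no-op
            active_bank = instr[1]
        # other instructions leave the active-bank state unchanged here
    return redundant

path = [('SELECT_BANK', 1), ('OP', 'movwf'), ('SELECT_BANK', 1), ('OP', 'addwf')]
print(find_redundant_bank_switches(path))   # -> [2]
```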
Abstract:
A numerical study is presented of the three-dimensional Gaussian random-field Ising model at T=0 driven by an external field. Standard synchronous relaxation dynamics is employed to obtain the magnetization versus field hysteresis loops. The focus is on the analysis of the number and size distribution of the magnetization avalanches. They are classified as nonspanning, one-dimensional-spanning, two-dimensional-spanning, or three-dimensional-spanning depending on whether or not they span the whole lattice in the different space directions. Moreover, finite-size scaling analysis enables identification of two different types of nonspanning avalanches (critical and noncritical) and two different types of three-dimensional-spanning avalanches (critical and subcritical), whose numbers increase with L as a power law with different exponents. We conclude by giving a scenario for avalanche behavior in the thermodynamic limit.
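A minimal sketch of the kind of dynamics described: synchronous zero-temperature relaxation on an L^3 lattice with quenched Gaussian random fields. The lattice size, disorder strength, and field value here are illustrative, not the paper's; flips are restricted to the ascending branch so the relaxation terminates.

```python
import numpy as np

# Zero-temperature RFIM relaxation sketch: at each step every down spin
# whose local field is positive flips simultaneously (synchronous update),
# until the configuration is stable. Parameters are illustrative.

rng = np.random.default_rng(0)
L, R, H = 8, 2.0, 3.0                    # lattice size, disorder, external field
h_rand = rng.normal(0.0, R, (L, L, L))   # quenched Gaussian random fields
s = -np.ones((L, L, L))                  # start fully magnetized down

def local_field(s):
    nn = sum(np.roll(s, shift, axis) for axis in range(3) for shift in (1, -1))
    return nn + h_rand + H               # J = 1, periodic boundaries

flips = 0
while True:
    unstable = (s < 0) & (local_field(s) > 0)
    if not unstable.any():
        break
    s[unstable] = 1.0                    # synchronous flip of all unstable spins
    flips += int(unstable.sum())

print("magnetization:", s.mean(), "spins flipped:", flips)
```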
Abstract:
The mean-field theory of a spin glass with a specific form of nearest- and next-nearest-neighbor interactions is investigated. Depending on the sign of the interaction matrix chosen, we find either the continuous replica symmetry breaking seen in the Sherrington-Kirkpatrick model or a one-step solution similar to that found in structural glasses. Our results are confirmed by numerical simulations, and the link between the type of spin-glass behavior and the density of eigenvalues of the interaction matrix is discussed.
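As a hedged illustration of the quantity the abstract links to the type of spin-glass behavior, one can build a random interaction matrix with nearest- and next-nearest-neighbor couplings and inspect its eigenvalue density (the toy ring geometry below is an assumption, not the paper's model):

```python
import numpy as np

# Illustrative sketch: a ring of N spins with random nearest- and
# next-nearest-neighbor couplings; the spectrum of the symmetric
# interaction matrix J is the object whose density the abstract discusses.

rng = np.random.default_rng(1)
N = 500
J = np.zeros((N, N))
for d in (1, 2):                              # nearest and next-nearest neighbors
    for i in range(N):
        j = (i + d) % N
        J[i, j] = J[j, i] = rng.normal()      # symmetric random coupling

eigvals = np.linalg.eigvalsh(J)               # real spectrum of symmetric J
density, edges = np.histogram(eigvals, bins=40, density=True)
print("spectral support: [%.2f, %.2f]" % (eigvals.min(), eigvals.max()))
print("peak density:", density.max())
```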
Abstract:
Motivation for speaker recognition work is presented in the first part of the thesis, along with an exhaustive survey of past work in this field. A low-cost system not involving complex computation was chosen for implementation. Towards achieving this, a PC-based system was designed and developed. A front-end analog-to-digital converter (12 bit) was built and interfaced to a PC. Software to control the ADC and to perform various analytical functions, including feature vector evaluation, was developed. It is shown that a fixed set of phrases incorporating evenly balanced phonemes is aptly suited for the speaker recognition work at hand, and a set of phrases was chosen for recognition. Two new methods are adopted for feature evaluation. Some new measurements, involving a symmetry-check method for pitch period detection and ACE, are used as features. Arguments are provided to show the need for a new model of speech production. Starting from heuristics, a knowledge-based (KB) speech production model is presented. In this model, a KB provides impulses to a voice-producing mechanism, and constant correction is applied via a feedback path. It is this correction that differs from speaker to speaker. Methods of defining measurable parameters for use as features are described. Algorithms for speaker recognition are developed and implemented, and two methods are presented. The first is based on the postulated model: the entropy of the utterance of a phoneme is evaluated, and the transitions of voiced regions are used as speaker-dependent features. The second method uses features found in other works, but evaluated differently; a knock-out scheme is used to provide the weightage values for the selection of features. Results of the implementation are presented, showing on average 80% recognition. It is also shown that if there are long gaps between sessions, the performance deteriorates, and the deterioration is speaker dependent. Cross-recognition percentages are also presented; in the worst case this rises to 30%, while the best case is 0%. Suggestions for further work are given in the concluding chapter.
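The abstract does not spell out the symmetry-check method, so the following stand-in uses a plain autocorrelation estimator to illustrate the kind of pitch-period feature involved:

```python
import numpy as np

# Stand-in sketch: autocorrelation-based pitch period estimation for one
# voiced frame (the thesis's symmetry-check method is not reproduced here).

def pitch_period(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate pitch period (in samples) of one voiced frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)      # admissible lag range
    return lo + int(np.argmax(ac[lo:hi]))        # lag of strongest periodicity

fs = 8000
t = np.arange(0, 0.03, 1 / fs)
frame = np.sin(2 * np.pi * 120 * t)              # synthetic 120 Hz voicing
print(fs / pitch_period(frame, fs))              # -> ~120.0 Hz
```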
Abstract:
In the current study, an epidemiological study is carried out by means of a literature survey in groups identified as being at higher potential for DDIs, as well as in other cases, to explore patterns of DDIs and the factors affecting them. The structure of the FDA Adverse Event Reporting System (FAERS) database is studied and analyzed in detail to identify issues and challenges in data mining drug-drug interactions. The necessary pre-processing algorithms are developed based on this analysis, and the Apriori algorithm is modified to suit the process. Finally, the modules are integrated into a tool to identify DDIs. The results are compared against a standard drug interaction database for validation. 31% of the associations obtained were identified as new, and the match with existing interactions was 69%. This match clearly indicates the validity of the methodology and its applicability to similar databases. Formulating the results using generic names expanded the relevance of the results to a global scale. This global applicability helps health care professionals worldwide to observe caution during the various stages of drug administration, thus considerably enhancing pharmacovigilance.
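As a hedged sketch of the mining step, the classic Apriori pruning idea applied to made-up adverse event reports (the thesis's FAERS-specific modifications are not reproduced here):

```python
from itertools import combinations
from collections import Counter

# Apriori-style frequent-pair mining over adverse event reports, each
# listing the drugs a patient received. Data and support threshold are
# illustrative only.

reports = [
    {'warfarin', 'aspirin'},
    {'warfarin', 'aspirin', 'omeprazole'},
    {'ibuprofen', 'aspirin'},
    {'warfarin', 'omeprazole'},
]
min_support = 2

# Pass 1: frequent single drugs
item_counts = Counter(d for r in reports for d in r)
frequent = {d for d, c in item_counts.items() if c >= min_support}

# Pass 2: candidate pairs built only from frequent items (Apriori pruning)
pair_counts = Counter()
for r in reports:
    for pair in combinations(sorted(r & frequent), 2):
        pair_counts[pair] += 1

for pair, c in pair_counts.items():
    if c >= min_support:
        print(pair, c)   # candidate drug-drug associations
```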
Abstract:
The automobile industry in India is influenced by the presence of national and multinational manufacturers. The presence of many manufacturers and brands in the state provides many choices to the customer. The current market for car manufacturers has been transformed from a monopoly of one or two manufacturers in the seventies to an oligopoly of many manufacturers in the current marketing scenario. The main objective of the research paper is to explore and conceptualize the various parameters that influence the purchase patterns of passenger cars in the State of Kerala and to develop a model from them. Thus, the main purpose of this paper is to come up with a model that shall facilitate further study of the consumer purchase behaviour patterns of passenger car owners in the State of Kerala, India. The author intends to undertake further quantitative analysis to verify and validate the model so developed. The main methods used for this paper are secondary research on available material and in-depth interviews of car dealers, car financing agencies, and car owners in the city of Cochin, in Kerala State, India. The in-depth interviews were conducted with the use of prepared questionnaires for car dealers, car customers, and car financing agencies. The findings resulted in the identification of the parameters that influence the consumer purchase behaviour of passenger cars and the formulation of the model, which will be the basis for the author's further research. The paper will be of tremendous value to existing and new car manufacturers, both indigenous and foreign, in formalizing and strategizing their policies towards an effective marketing strategy, so as to market their models in the State, which is known for its high literacy, consumerism, and higher educational penetration.
Abstract:
In the field of structural dynamics, computer-aided model validation techniques are now widespread. Experimental modal data are used to correct a numerical model for further analyses. However, the validated model represents only the dynamic behavior of the tested structure. In reality, many factors inevitably lead to varying modal test results: changing ambient conditions during a test, slightly different test setups, a test on a nominally identical but different structure (e.g., from series production), etc. For a stochastic simulation to be carried out, a set of assumptions must be made for the random variables used. Consequently, an inverse method is needed that makes it possible to identify a stochastic model from experimental modal data. This thesis describes the development of a parameter-based approach for identifying stochastic simulation models in the field of structural dynamics. The developed method relies on first-order sensitivities, with which parameter means and covariances of the numerical model can be determined from stochastic experimental modal data.
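A minimal sketch of the first-order sensitivity idea, with an assumed sensitivity matrix and synthetic repeated-test data (both are illustrations, not the thesis's model):

```python
import numpy as np

# A sensitivity matrix S (derivatives of modal outputs with respect to
# model parameters) linearizes the model, so parameter mean updates and
# covariances can be estimated from the scatter of repeated modal tests
# via the pseudoinverse.

rng = np.random.default_rng(2)
S = np.array([[2.0, 0.5],
              [0.3, 1.5],
              [1.0, 1.0]])                   # 3 modal outputs, 2 parameters

y_nominal = np.array([10.0, 5.0, 7.0])       # outputs at nominal parameters
dp_true = rng.normal(0.0, 0.1, (50, 2))      # unknown parameter scatter
y_tests = y_nominal + dp_true @ S.T          # 50 simulated modal tests

S_pinv = np.linalg.pinv(S)
dp_mean = S_pinv @ (y_tests.mean(axis=0) - y_nominal)      # mean update
cov_p = S_pinv @ np.cov(y_tests, rowvar=False) @ S_pinv.T  # parameter covariance

print("estimated mean update:", dp_mean)            # ~0 here
print("estimated parameter covariance:\n", cov_p)   # ~0.01 * I
```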
Abstract:
Array technologies have made it possible to record simultaneously the expression pattern of thousands of genes. A fundamental problem in the analysis of gene expression data is the identification of highly relevant genes that either discriminate between phenotypic labels or are important with respect to the cellular process studied in the experiment: for example, cell cycle or heat shock in yeast experiments, chemical or genetic perturbations of mammalian cell lines, and genes involved in class discovery for human tumors. In this paper we focus on the task of unsupervised gene selection. The problem of selecting a small subset of genes is particularly challenging, as the datasets involved are typically characterized by a very small sample size, on the order of a few tens of tissue samples, and by a very large feature space, as the number of genes tends to be in the high thousands. We propose a model-independent approach which scores candidate gene selections using spectral properties of the candidate affinity matrix. The algorithm is very straightforward to implement, yet contains a number of remarkable properties which guarantee consistent sparse selections. To illustrate the value of our approach, we applied our algorithm to five different datasets. The first consists of time course data from four well-studied hematopoietic cell lines (HL-60, Jurkat, NB4, and U937). The other four datasets include three well-studied treatment outcomes (large cell lymphoma, childhood medulloblastomas, breast tumors) and one unpublished dataset (lymph status). We compared our approach both with other unsupervised methods (SOM, PCA, GS) and with supervised methods (SNR, RMB, RFE). The results clearly show that our approach considerably outperforms all the other unsupervised approaches in our study, is competitive with supervised methods, and in some cases even outperforms supervised approaches.
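A hedged sketch of scoring a candidate gene subset by spectral properties of the sample affinity matrix it induces (the paper's exact score is not reproduced; the leading-eigenvalue mass of a centered Gram matrix serves as a stand-in coherence score):

```python
import numpy as np

# Score a candidate gene subset by how much of the spectral mass of the
# induced sample-by-sample affinity matrix sits in the top eigenvalues.
# Data and subset choices below are synthetic illustrations.

def spectral_score(X, genes, k=2):
    """X: samples x genes expression matrix; genes: candidate column subset."""
    A = X[:, genes] - X[:, genes].mean(axis=0)
    G = A @ A.T                                # sample affinity (Gram) matrix
    w = np.sort(np.linalg.eigvalsh(G))[::-1]   # eigenvalues, descending
    return w[:k].sum() / max(w.sum(), 1e-12)   # mass in top-k eigenvalues

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 1000))                # 30 tissue samples, 1000 genes
X[:15, :5] += 3.0                              # genes 0-4 separate two phenotypes
print(spectral_score(X, list(range(5))))       # informative subset: high score
print(spectral_score(X, list(range(5, 10))))   # uninformative subset: lower
```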
Abstract:
Caches are known to consume up to half of all system power in embedded processors. Co-optimizing the performance and power of the cache subsystem is therefore an important step in the design of embedded systems, especially those employing application-specific instruction processors. In this project, we propose an analytical cache model that succinctly captures the miss performance of an application over the entire cache parameter space. Unlike exhaustive trace-driven simulation, our model requires that the program be simulated only once, so that a few key characteristics can be obtained. Using these application-dependent characteristics, the model can span the entire cache parameter space, consisting of cache size, associativity, and cache block size. In our unified model, we are able to cater for direct-mapped, set-associative, and fully associative instruction, data, and unified caches. Validation against full trace-driven simulations shows that our model has a high degree of fidelity. Finally, we show how the model can be coupled with a power model for caches so that one can very quickly decide on Pareto-optimal performance-power design points for rapid design space exploration.
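An illustrative stand-in for how such a model supports rapid design space exploration (the power-law miss curve and toy power model below are assumptions, not the paper's model):

```python
# Sweep a miss-rate curve, fitted once from profiling, over cache sizes,
# combine it with a toy power model, and keep the Pareto-optimal points.
# All constants are made up for illustration.

def miss_rate(size_kb, m0=0.05, c0=4.0, alpha=0.5):
    return m0 * (c0 / size_kb) ** alpha          # misses fall as size grows

def energy(size_kb, miss):
    return 20.0 * size_kb + 5000.0 * miss        # toy leakage + miss penalty

points = [(s, miss_rate(s), energy(s, miss_rate(s)))
          for s in (1, 2, 4, 8, 16, 32, 64)]     # cache sizes in KB

# Pareto filter: keep points not dominated in (miss rate, energy)
pareto = [p for p in points
          if not any(q[1] <= p[1] and q[2] <= p[2] and q != p for q in points)]
for size_kb, m, e in pareto:
    print(f"{size_kb:3d} KB  miss={m:.4f}  energy={e:.1f}")
```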
Abstract:
This study was conducted to evaluate the correlation between arterial and central venous lactate in children with sepsis and septic shock in a pediatric intensive care unit. Forty-two patients aged between 1 month and 17 years 364 days, diagnosed with sepsis or septic shock and admitted to the intensive care unit of a university referral hospital, were included. The lactate value was recorded from an arterial and a central venous blood sample taken simultaneously within the first 24 hours of admission to the unit. Using Spearman's Rho test, a correlation of 0.872 (p<0.001) was found, and after adjusting for the use of vasoactive medications, age, and weight (nonparametric quantile regression model), the correlation remained strong and significant.
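A minimal sketch of the statistic used, Spearman's rank correlation on paired lactate measurements (the data below are invented; no tie correction is applied):

```python
import numpy as np

# Spearman's rho = Pearson correlation of the ranks of the paired values.

def spearman_rho(x, y):
    rx = np.argsort(np.argsort(x))       # ranks (no tie correction)
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

arterial = np.array([1.2, 2.5, 4.1, 1.8, 3.3, 5.0])
venous   = np.array([1.4, 2.3, 4.5, 2.4, 3.1, 4.8])
print(spearman_rho(arterial, venous))    # -> ~0.94 for these made-up pairs
```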
Abstract:
Although the concept of wisdom has been widely studied by experts in fields such as philosophy, religion, and psychology, it still faces limitations regarding its definition and assessment. This work therefore aims to formulate a definition of the concept of wisdom that allows a proposal for assessing the concept as a competence in managers. To this end, a qualitative documentary analysis was carried out. Various texts on the history, definitions, and methodologies for assessing both wisdom and competences were analyzed, differentiating wisdom from other constructs and analyzing the difference between general and managerial competences, in order subsequently to define wisdom as a managerial competence. As a result of this analysis, a test prototype called SAPIENS-O was produced, intended to assess wisdom as a managerial competence. The scope of the instrument includes the possibility of measuring wisdom as a competence in managers, the possibility of offering a new perspective on the theoretical and empirical difficulties surrounding wisdom, and the possibility of facilitating the study of wisdom in real settings, more specifically in organizational settings.
Abstract:
Background: Plasmodium vivax malaria remains a major health problem in tropical and sub-tropical regions worldwide. Several rhoptry proteins which are important for interaction with and/or invasion of red blood cells, such as PfRONs, Pf92, Pf38, Pf12 and Pf34, have been described during the last few years and are being considered as potential anti-malarial vaccine candidates. This study describes the identification and characterization of the P. vivax rhoptry neck protein 1 (PvRON1) and examines its antigenicity in natural P. vivax infections. Methods: The PvRON1 encoding gene, which is homologous to that encoding the P. falciparum apical sushi protein (ASP) according to the plasmoDB database, was selected as our study target. Transcription of the pvron1 gene was evaluated by RT-PCR using RNA obtained from the P. vivax VCG-1 strain. Two peptides derived from the deduced P. vivax Sal-I PvRON1 sequence were synthesized and inoculated into rabbits to obtain anti-PvRON1 antibodies, which were used to confirm the protein expression in VCG-1 strain schizonts, along with its association with detergent-resistant microdomains (DRMs), by Western blot, and its localization by immunofluorescence assays. The antigenicity of the PvRON1 protein was assessed by ELISA using human sera from individuals previously exposed to P. vivax malaria. Results: In the P. vivax VCG-1 strain, RON1 is a 764 amino acid-long protein. In silico analysis has revealed that PvRON1 shares essential characteristics with different antigens involved in invasion, such as the presence of a secretory signal, a GPI-anchor sequence, and a putative sushi domain. The PvRON1 protein is expressed in the parasite's schizont stage, localized in the rhoptry neck, and associated with DRMs. Recombinant protein recognition by human sera indicates that this antigen can trigger an immune response during a natural infection with P. vivax. Conclusions: This study shows the identification and characterization of the P. vivax rhoptry neck protein 1 in the VCG-1 strain. Taking into account that PvRON1 shares several important characteristics with other Plasmodium antigens that play a functional role during RBC invasion and, as shown here, is antigenic, it could be considered a good vaccine candidate. Further studies aimed at assessing its immunogenicity and protection-inducing ability in the Aotus monkey model are thus recommended.
Abstract:
This case study aims to analyze the extent to which the commercial dynamics of Chinese oil diplomacy have turned Ecuador into a strategic partner for the PRC. Oil as an energy source is essential for carrying out industrialization processes and sustaining the economic growth of the Asian lion, so its pursuit has become a central item on the foreign policy agenda. Ecuador, the South American country with the third-largest oil reserves, after Venezuela and Brazil, has become a zone of influence for the PRC, and contracts for the sale of oil have been signed through state-owned oil companies. Although the bilateral relationship is asymmetric, the study seeks to establish whether Ecuador is a strategic partner in the region.
Abstract:
In this chapter, an asymmetric DSGE model is built in order to account for asymmetries in business cycles. One of the most important contributions of this work is the construction of a general utility function which nests loss aversion, risk aversion, and habit formation by means of a smooth transition function. The main idea behind this asymmetric utility function is that in recessions the agents over-smooth consumption and leisure choices in order to prevent a huge deviation of these from the reference level of utility, while in booms the agents simply smooth consumption and leisure, trying to stay as far as possible from the reference level of utility. Simulations of this model by means of the perturbation method show that it is possible to reproduce asymmetric business cycles in which recessions are stronger (on impact) than booms, and booms are longer-lasting than recessions. One additional and unexpected result is a downward stickiness displayed by real wages. As a consequence, there is a more persistent fall in employment in recessions than in booms. Thus, the model reproduces not only asymmetric business cycles but also real stickiness and hysteresis.
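A hedged sketch of a utility with a smooth-transition asymmetry (the paper's exact functional form is not given in the abstract; the logistic weight and parameter values below are assumptions):

```python
import numpy as np

# Utility of consumption c relative to a habit reference c_ref: a logistic
# smooth-transition weight blends a normal regime with a loss-averse regime
# in which shortfalls below the reference are scaled by lam > 1.

def smooth_transition(x, gamma=10.0):
    return 1.0 / (1.0 + np.exp(-gamma * x))   # ~0 below reference, ~1 above

def utility(c, c_ref, sigma=2.0, lam=2.5):
    gap = c - c_ref                            # habit-adjusted consumption
    w = smooth_transition(gap)                 # regime weight: boom vs. recession
    crra = ((c / c_ref) ** (1 - sigma) - 1) / (1 - sigma)  # risk aversion
    return w * crra + (1 - w) * lam * crra     # losses weigh more than gains

c_ref = 1.0
for c in (0.9, 1.0, 1.1):
    print(c, utility(c, c_ref))   # the loss at 0.9 outweighs the gain at 1.1
```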
Abstract:
We propose and estimate a financial distress model that explicitly accounts for the interactions, or spill-over effects, between financial institutions, through the use of a spatial contiguity matrix that is built from financial network data on interbank transactions. This setup of the financial distress model allows for the empirical validation of the importance of network externalities in determining financial distress, in addition to institution-specific and macroeconomic covariates. The relevance of this specification is that it simultaneously incorporates micro-prudential factors (Basel II) as well as macro-prudential and systemic factors (Basel III) as determinants of financial distress. Results indicate that network externalities are an important determinant of the financial health of a financial institution. The parameter that measures the effect of network externalities is both economically and statistically significant, and its inclusion as a risk factor reduces the importance of firm-specific variables such as the size or degree of leverage of the financial institution. In addition, we analyze the policy implications of the network factor model for capital requirements and deposit insurance pricing.
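A minimal sketch of the spatial structure described: each institution's distress loads on its neighbors' distress through a row-normalized contiguity matrix built from interbank exposures. The exposures, rho, and beta below are made up; the paper estimates these rather than fixing them.

```python
import numpy as np

# Simulate the reduced form of a spatial autoregressive distress model:
# y = rho * W @ y + X @ beta + eps  =>  y = (I - rho*W)^{-1} (X @ beta + eps)

rng = np.random.default_rng(4)
n = 5
exposures = rng.uniform(0, 1, (n, n)) * (rng.random((n, n)) < 0.4)
np.fill_diagonal(exposures, 0.0)
W = exposures / np.maximum(exposures.sum(axis=1, keepdims=True), 1e-12)

rho, beta = 0.4, np.array([1.0, -0.5])       # network spillover, covariate effects
X = rng.normal(size=(n, 2))                  # institution-specific covariates
eps = rng.normal(0, 0.1, n)

y = np.linalg.solve(np.eye(n) - rho * W, X @ beta + eps)
print("distress index per institution:", np.round(y, 3))
```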