916 results for automatic test case generation


Relevance:

100.00%

Publisher:

Abstract:

The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent, and to limit the consequences of, any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and it incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular the accidents, considered to be plausible have been taken into account, and that the monitoring systems and the engineered safety and safeguard systems will be capable of meeting the safety goals. Probabilistic safety analysis, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective in the comprehensive assessment of the measures needed to prevent accidents of small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) has demanded a more extensive use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. This is where the theory of stimulated dynamics (TSD) comes in, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). This methodology attempts to extend classical PSA with accident dynamic analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency. The application of the ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS supports accident dynamic analysis through the simulation of nuclear accident sequences and operating procedures; furthermore, it includes probabilistic quantification of fault trees and sequences, and the integration and statistical treatment of risk metrics. SCAIS relies on an intensive use of code coupling techniques to join typical thermal-hydraulic analysis, severe accident and probability calculation codes. The integration of accident simulation into the risk assessment process, which requires the use of complex nuclear plant models, is what makes the methodology so powerful, albeit at the cost of an enormous increase in complexity. As the complexity of the process is concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations, and this is the focus of the present work. This document presents the work done on the investigation of more efficient techniques applied to the risk assessment process within the ISA methodology; the primary goal of these techniques is therefore to decrease the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task, and because of time limitations the scope of the work had to be reduced. Therefore, some assumptions were made in order to work in simplified scenarios best suited to an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section then introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. A description of the simulation framework requirements and design is given afterwards, followed by an introduction to the developed techniques, with full detail of their mathematical background and procedures. Later, the test case used is described and the results from the application of the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
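The abstract does not spell out the estimation procedure, but the core difficulty it refers to can be illustrated with a minimal sketch (a crude Monte Carlo estimate of a small damage probability, with a hypothetical stand-in for the accident simulation, not the ISA/SCAIS algorithm): the statistical error of the estimate shrinks only as the square root of the number of simulations, which is why reducing the number of required runs matters.

```python
import numpy as np

# Illustrative sketch only: crude Monte Carlo estimation of a damage
# probability. run_simulation is a stand-in for an expensive
# accident-sequence simulation; the threshold and distribution are
# hypothetical, not taken from the work described above.
rng = np.random.default_rng(0)

def run_simulation(params):
    # Placeholder for a transient simulation that returns True when the
    # (made-up) damage criterion is exceeded.
    peak_temperature = 600.0 + 150.0 * rng.standard_normal()
    return peak_temperature > 950.0   # hypothetical damage threshold

N = 10_000
hits = sum(run_simulation(None) for _ in range(N))
p_hat = hits / N
# Standard error of the crude estimator: many runs are needed when p is small.
std_err = np.sqrt(p_hat * (1.0 - p_hat) / N)
print(f"estimated damage probability: {p_hat:.4f} +/- {std_err:.4f}")
```

For a damage probability of the order of 1%, tens of thousands of such runs are needed for a tight estimate, which becomes prohibitive when each run is a full plant transient simulation.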

Relevance:

100.00%

Publisher:

Abstract:

This paper assesses the main challenges associated with the propagation and channel modeling of broadband radio systems in the complex environment of high-speed and metropolitan railways. These challenges comprise practical simulation, interference modeling, radio planning, test trials, and performance evaluation in different railway scenarios, using Long Term Evolution (LTE) as a test case. This approach involves several steps; the first is the use of a radio propagation simulator based on ray-tracing techniques to accurately predict propagation. Besides the radio propagation simulator, a complete test bed has been constructed to assess LTE performance, channel propagation conditions and interference with other systems in real-world environments by means of standard-compliant LTE transmissions. These measurement results allowed us to evaluate the propagation and performance of broadband signals and to test the suitability of LTE radio technology for complex railway scenarios.
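The paper's simulator is ray-tracing based; as a much simpler, generic illustration of deterministic propagation prediction (a textbook two-ray ground-reflection approximation, not the simulator described in the abstract), one can estimate the received power from antenna heights and distance:

```python
import numpy as np

# Minimal illustration only: far-field two-ray ground-reflection model,
# a textbook approximation, not the ray-tracing simulator of the paper.
def two_ray_received_power(pt_w, gt, gr, ht_m, hr_m, d_m):
    """Received power (W) for transmit power pt_w, antenna gains gt/gr,
    antenna heights ht_m/hr_m and link distance d_m (valid for large d)."""
    return pt_w * gt * gr * (ht_m * hr_m) ** 2 / d_m ** 4

# Hypothetical example: 10 W base station, unity gains, 8 m trackside mast,
# 3 m train-roof antenna, 2 km separation.
pr = two_ray_received_power(10.0, 1.0, 1.0, 8.0, 3.0, 2000.0)
print(f"received power: {10 * np.log10(pr / 1e-3):.1f} dBm")
```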

Relevance:

100.00%

Publisher:

Abstract:

Modern sensor technologies and simulators applied to large and complex dynamic systems (such as road traffic networks, sets of river channels, etc.) produce large amounts of behavior data that are difficult for users to interpret and analyze. Software tools that generate presentations combining text and graphics can help users understand this data. In this paper we describe the results of our research on automatic multimedia presentation generation (including text, graphics, maps, images, etc.) for interactive exploration of behavior datasets. We designed a novel user interface that combines automatically generated text and graphical resources. We describe the general knowledge-based design of our presentation generation tool. We also present applications that we developed to validate the method, and a comparison with related work.

Relevance:

100.00%

Publisher:

Abstract:

Educational augmented reality is a technology that is currently improving the quality of teaching; the use of mobile devices allows students to take charge of their own learning without being confined to a specific place or time. Collaborative augmented reality applications are increasingly being used in education, encouraging group work in which students share knowledge, doubts and opinions and reach a higher cognitive level than when working individually. This work presents the state of the art of educational augmented reality applications on mobile devices, and of collaborative educational augmented reality applications, developed since 2002 and deployed in educational institutions. A study of augmented reality, mobile augmented reality and mobile learning is also carried out. In addition, building on the characteristics identified in the study of augmented reality applications, an analysis and design of a mobile application for the induction project for new UPM students is presented, together with an authoring tool for managing the activities proposed by UPM teachers. Finally, a test case is presented in which part of the proposal of this work is implemented, producing a functional part of the initial project, named PIANI-UPM.

Relevance:

100.00%

Publisher:

Abstract:

The development of mixed-criticality virtualized multicore systems poses new challenges that are the subject of active research work. An additional source of complexity is that a set of partitions must now be identified and applications allocated to them. In this task, a number of issues have to be considered, such as the criticality level of the application, security and dependability requirements, the operating system used by the application, the granularity of the timing requirements, specific hardware needs, etc. The MultiPARTES [6] toolset relies on Model Driven Engineering (MDE) [12], a suitable approach in this setting. This paper describes the support provided for automatic generation of the system partitioning and for toolset extensibility.
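The allocation algorithm itself is not described in the abstract; as a purely hypothetical sketch (not the MultiPARTES toolset, whose partitioning is generated from MDE models), a constraint-driven grouping that never mixes criticality levels or operating systems within a partition might look like this:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical sketch: group applications into partitions so that no
# partition mixes criticality levels or operating systems. This is NOT
# the MultiPARTES algorithm, only an illustration of the kind of
# constraint-driven grouping the abstract describes.
@dataclass(frozen=True)
class Application:
    name: str
    criticality: str   # e.g. "high", "low"
    os: str            # e.g. "RTOS", "Linux"

def allocate(apps):
    partitions = defaultdict(list)
    for app in apps:
        # One partition per (criticality, operating system) pair.
        partitions[(app.criticality, app.os)].append(app.name)
    return dict(partitions)

apps = [
    Application("flight_control", "high", "RTOS"),
    Application("logging", "low", "Linux"),
    Application("navigation", "high", "RTOS"),
]
print(allocate(apps))
```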

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this work is to analyze a complex high-lift configuration in which significant regions of separated flow are present. Current state-of-the-art methods have some difficulty predicting the origin and the progression of this separated flow as the angle of attack increases. The mechanisms responsible for the maximum-lift limit on multi-element wing configurations are not clear; this stability analysis could help to understand the physics behind the phenomenon and to find a relation between the flow separation and the onset of instability. The methodology presented herein consists of the computation of a steady base-flow solution based on a finite-volume discretization, and the solution of a generalized eigenvalue problem corresponding to the perturbed and linearized problem. The eigenvalue problem has been solved with the Arnoldi iterative method, one of the Krylov subspace projection methods. The described methodology was applied to the NACA0012 test case in subsonic and in transonic conditions and, finally, for the first time to the authors' knowledge, to an industrial multi-component geometry, the A310 airfoil, in order to identify low-frequency instabilities related to the separation. One important conclusion is that, for all the analyzed geometries, one unstable mode related to flow separation appears for an angle of attack greater than the one corresponding to the maximum-lift-coefficient condition. Finally, an adjoint study was carried out in order to evaluate the receptivity and the structural sensitivity of the geometries, giving an indication of the region of the domain where a modification would produce the largest change in the flow field.
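As an illustrative sketch of the numerical machinery (not the flow solver or the matrices used in the paper), the Arnoldi/Krylov solution of a generalized eigenvalue problem A q = λ M q can be obtained with ARPACK through SciPy in shift-invert mode; the matrices, shift and problem size below are made up:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# Illustrative sketch: solve the generalized eigenvalue problem
# A q = lambda * M q with the implicitly restarted Arnoldi method
# (ARPACK) in shift-invert mode. A, M and the shift are made up; in a
# global stability analysis A would be the linearized flow operator
# and M the mass matrix of the discretization.
n = 200
A = sp.diags(np.linspace(-1.0, 1.0, n)) + sp.random(n, n, density=0.01, random_state=1)
M = sp.identity(n, format="csc")

# Look for the 6 eigenvalues closest to the shift sigma = 0.5.
vals, vecs = eigs(A.tocsc(), k=6, M=M, sigma=0.5, which="LM")
print(np.sort_complex(vals))
```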

Relevance:

100.00%

Publisher:

Abstract:

Attacks on information networks are increasingly sophisticated and demand constant evolution and improvement of detection techniques. To that end, this project designs and implements a cooperative platform for network-based intrusion detection. First, a theoretical study of the relevant technological framework is presented, describing and characterizing the software used to attack systems (malware) as well as the methods used to deliver that software (attack vectors). The document also describes the so-called APTs, targeted attacks backed by a large investment of money and time, which may combine any of the existing malware and attack vectors. To counter these attacks, intrusion detection and prevention systems are studied, briefly describing the algorithms most commonly used today. Second, a network platform dedicated to packet and connection analysis has been proposed and developed to detect possible intrusions. The system is oriented towards SCADA (Supervisory Control And Data Acquisition) systems, whose main parts are defined beforehand, although it works on any IPv4/IPv6 network. The system is implemented on low-power devices (Raspberry Pi) placed between the network and the end device to be analysed. They run two client-server applications developed for this project (the central Raspberry Pi runs the server application and the slaves run the client application) that operate cooperatively using Hadoop distributed technology, which is explained beforehand; this technology makes the system fully scalable. The server application provides a graphical interface to administer the analysis platform centrally, showing the alarms of each device and rating each packet according to how dangerous it is. The algorithm developed in the application computes the rate of packets per unit time entering and leaving the end device, processing the packets and analysing their signalling information, and builds several databases that progressively improve the robustness of the system, thereby reducing the possibility of external attacks. Finally, the initial project included cloud processing of the main application, so that several infrastructures could be administered concurrently; owing to the extra work required, the system has been left ready for this functionality to be implemented later. In the current experimental case, the processing of the server application is performed on the main Raspberry Pi, yielding a scalable, fast and fault-tolerant system.
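The abstract only states that the detection algorithm computes a packets-per-time ratio for the monitored device; a minimal, hypothetical sketch of that idea (not the project's actual client-server application) is a sliding-window packet-rate check:

```python
from collections import deque
import time

# Hypothetical sketch of the packets/time idea described in the abstract:
# keep the timestamps of recent packets and raise an alarm when the rate
# over a sliding window exceeds a threshold. Window and threshold values
# are made up.
WINDOW_S = 10.0          # sliding window length in seconds
MAX_RATE = 500.0         # packets per second considered suspicious

timestamps = deque()

def on_packet(now=None):
    """Record one observed packet and return True if the rate is suspicious."""
    now = time.monotonic() if now is None else now
    timestamps.append(now)
    while timestamps and now - timestamps[0] > WINDOW_S:
        timestamps.popleft()
    rate = len(timestamps) / WINDOW_S
    return rate > MAX_RATE

# Example: simulate a burst of 6000 packets arriving within six seconds.
alarm = any(on_packet(now=0.001 * i) for i in range(6000))
print("alarm raised:", alarm)
```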

Relevance:

100.00%

Publisher:

Abstract:

In this work a p-adaptation (modification of the polynomial order) strategy based on the minimization of the truncation error is developed for high-order discontinuous Galerkin methods. The truncation error is approximated by means of an estimation procedure, which enables the identification of mesh regions that require adaptation. Three truncation error estimation approaches are developed, termed a posteriori, quasi-a priori and quasi-a priori corrected. All of them require fine solutions, obtained by enriching the polynomial order, to solve the numerical problem with adequate accuracy; the first needs time-converged fine solutions, while the last two rely on non-converged solutions, which leads to faster computations. Based on these truncation error estimation methods, algorithms for mesh adaptation were designed and tested. Firstly, an isotropic adaptation approach is presented, which leads to equally distributed polynomial orders in the different coordinate directions. This first implementation is improved by incorporating a method to extrapolate the truncation error, which results in a significant reduction of computational cost. Secondly, the employed high-order method permits the spatial decoupling of the estimated errors and enables anisotropic p-adaptation. The incorporation of anisotropic features leads to meshes with different polynomial orders in the different coordinate directions, so that flow features related to the geometry are better resolved. These adaptations result in a significant reduction of degrees of freedom and computational cost, although the amount of improvement depends on the test case. Finally, this anisotropic approach is extended by using error extrapolation, which leads to an even larger reduction in computational cost. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. The main result is that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to uniform polynomial enrichment: for a viscous boundary-layer flow, we obtain speedups by factors of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
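The adaptation rule is not spelled out in the abstract; a toy sketch of anisotropic p-adaptation driven by per-direction truncation error estimates (with made-up error values and tolerance, not the authors' tau-estimation algorithm) could look like this:

```python
import numpy as np

# Toy sketch of anisotropic p-adaptation: for every element, raise the
# polynomial order independently in each coordinate direction whose
# estimated truncation error exceeds a tolerance. The error estimates,
# tolerance and order cap below are invented for illustration.
def adapt_orders(orders, tau, tol, p_max=8):
    """orders, tau: arrays of shape (n_elements, n_directions)."""
    orders = orders.copy()
    needs_refinement = tau > tol
    orders[needs_refinement] = np.minimum(orders[needs_refinement] + 1, p_max)
    return orders

orders = np.full((4, 2), 3)                     # 4 elements, directions (x, y)
tau = np.array([[1e-3, 5e-6],
                [2e-5, 4e-3],
                [7e-6, 8e-6],
                [3e-3, 2e-3]])
print(adapt_orders(orders, tau, tol=1e-4))
```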

Relevance:

100.00%

Publisher:

Abstract:

A discrete framework for computing the global stability and the sensitivity to external perturbations for any set of partial differential equations is presented, and it is applied in particular to the compressible Navier-Stokes equations. A complex-step approximation is used to achieve near-analytical accuracy in the evaluation of the Jacobian matrix. Sensitivity maps for the sensitivity to base-flow modifications and to a steady force are computed to identify regions of the flow field where an input could have a stabilising effect. Four test cases are presented: (1) an analytical test case to verify the discrete derivation, (2) a lid-driven cavity at low Reynolds number to show the improved accuracy in the calculation of the eigenvalues when using the complex-step approximation, (3) the 2D flow past a circular cylinder just below the critical Reynolds number to validate the methodology, and finally (4) the flow past an open cavity, presented as an example of the discrete method applied to a convectively unstable case. The latter three cases (2-4) were solved with the 2D compressible Navier-Stokes equations using a Discontinuous Galerkin Spectral Element Method. Good agreement with results in the literature was obtained for the validation test case (3). Furthermore, it is shown that for the calculation of the direct and adjoint eigenmodes and their sensitivity maps to external perturbations, the use of complex variables is paramount for obtaining an accurate prediction. An analysis for stabilising the wake past an actuator disc, which represents a simple model of propellers, helicopter rotors or wind turbines, is also presented. The first flow bifurcation of the actuator disc is explored, and the results suggest that it is associated with a Kelvin-Helmholtz type instability whose stability depends on the Reynolds number and on the flow resistance applied through the disc (the actuator forcing). First, we report that decreasing the disc resistance has a stabilising effect similar to a decrease in the Reynolds number. Second, a discrete sensitivity analysis identifies two regions suitable for the placement of flow-control forcing, one close to the disc and one far downstream where the instability originates. Third, we show that adding a localised forcing close to the actuator provides more stabilisation than forcing far downstream. The analysis of the controlled flow fields confirms that modifying the velocity gradient close to the actuator is more efficient at stabilising the wake than controlling the sheared flow far downstream. An interesting application of these results is to provide guidelines for stabilising the wakes of wind or tidal turbines placed in an energy farm, in order to minimise unsteady interactions between turbines.
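The complex-step approximation itself is a standard device: a derivative is recovered from the imaginary part of the function evaluated at a complex perturbation, Im f(x + ih)/h, with no subtractive cancellation, so the step h can be made tiny. A minimal sketch of a complex-step Jacobian for a made-up residual function (illustrative only, not the authors' solver) is:

```python
import numpy as np

# Minimal complex-step Jacobian: J[i, j] = Im(f(x + i*h*e_j))_i / h.
# Unlike finite differences, there is no subtractive cancellation, so h
# can be taken extremely small and the result is accurate to near
# machine precision. The residual function below is invented.
def complex_step_jacobian(f, x, h=1e-30):
    x = np.asarray(x, dtype=complex)
    n = x.size
    m = f(x).size
    J = np.empty((m, n))
    for j in range(n):
        xp = x.copy()
        xp[j] += 1j * h
        J[:, j] = np.imag(f(xp)) / h
    return J

def residual(u):
    return np.array([u[0] ** 2 + np.sin(u[1]), u[0] * u[1]])

print(complex_step_jacobian(residual, [1.0, 0.5]))
# Exact Jacobian for comparison: [[2*u0, cos(u1)], [u1, u0]]
```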

Relevance:

100.00%

Publisher:

Abstract:

Molecular differentiation between races or closely related species is often incongruent with the reproductive divergence of the taxa of interest. Shared ancient polymorphism and/or introgression during secondary contact may be responsible for the incongruence. At loci contributing to speciation, these two complications should be minimized (1, 2); hence, their variation may more faithfully reflect the history of the species' reproductive differentiation. In this study, we analyzed DNA polymorphism at the Odysseus (OdsH) locus of hybrid sterility between Drosophila mauritiana and Drosophila simulans and were able to verify such a prediction. Interestingly, DNA variation only a short distance away (1.8 kb) appears not to be influenced by the forces that shape the recent evolution of the OdsH coding region. This locus may thus represent a test case for inferring the phylogeny of very closely related species.

Relevance:

100.00%

Publisher:

Abstract:

Plasma levels of corticosterone are often used as a measure of “stress” in wild animal populations. However, we lack conclusive evidence that different stress levels reflect different survival probabilities between populations. Galápagos marine iguanas offer an ideal test case because island populations are affected differently by recurring El Niño famine events, and population-level survival can be quantified by counting iguanas locally. We surveyed corticosterone levels in six populations during the 1998 El Niño famine and the 1999 La Niña feast period. Iguanas had higher baseline and handling stress-induced corticosterone concentrations during famine than feast conditions. Corticosterone levels differed between islands and predicted survival through an El Niño period. However, among individuals, baseline corticosterone was only elevated when body condition dropped below a critical threshold. Thus, the population-level corticosterone response was variable but nevertheless predicted overall population health. Our results lend support to the use of corticosterone as a rapid quantitative predictor of survival in wild animal populations.

Relevance:

100.00%

Publisher:

Abstract:

We have demonstrated the assembly of two-dimensional patterns of functional antibodies on a surface. In particular, we have selectively adsorbed micrometer-scale regions of biotinylated immunoglobulin that exhibit specific antigen binding after adsorption. The advantage of this technique is its potential adaptability to adsorbing arbitrary proteins in tightly packed monolayers while retaining functionality. The procedure begins with the formation of a self-assembled monolayer of n-octadecyltrimethoxysilane (OTMS) on a silicon dioxide surface. This monolayer can then be selectively removed by UV photolithography. Under appropriate solution conditions, the OTMS regions will adsorb a monolayer of bovine serum albumin (BSA), while the silicon dioxide regions where the OTMS has been removed by UV light will adsorb less than 2% of a monolayer, thus creating high contrast patterned adsorption of BSA. The attachment of the molecule biotin to the BSA allows the pattern to be replicated in a layer of streptavidin, which bonds to the biotinylated BSA and in turn will bond an additional layer of an arbitrary biotinylated protein. In our test case, functionality of the biotinylated goat antibodies raised against mouse immunoglobulin was demonstrated by the specific binding of fluorescently labeled mouse IgG.

Relevance:

100.00%

Publisher:

Abstract:

A capillary electrophoresis method has been developed to study DNA-protein complexes by mobility-shift assay. This method is at least 100 times more sensitive than conventional gel mobility-shift procedures. Key features of the technique include the use of a neutral coated capillary, a small amount of linear polymer in the separation medium, and the use of covalently dye-labeled DNA probes that can be detected with a commercially available laser-induced fluorescence monitor. The capillary method provides quantitative data in runs requiring < 20 min, from which dissociation constants are readily determined. As a test case we studied interactions of a developmentally important sea urchin embryo transcription factor, SpP3A2. As few as 2-10 × 10^6 molecules of specific SpP3A2-oligonucleotide complex were reproducibly detected, using recombinant SpP3A2, crude nuclear extract, egg lysates, and even a single sea urchin egg lysed within the capillary column.
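The abstract notes that dissociation constants are readily determined from the quantitative data; as a generic, hypothetical illustration (not the authors' analysis of SpP3A2), a Kd can be fitted from the bound fraction of probe measured at several protein concentrations, assuming the simple binding isotherm f = [P]/(Kd + [P]):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical illustration: fit a dissociation constant Kd from the
# fraction of DNA probe bound at several protein concentrations,
# assuming the simple one-site binding isotherm f = [P] / (Kd + [P]).
# The data points below are invented for the example.
def bound_fraction(protein_nM, kd_nM):
    return protein_nM / (kd_nM + protein_nM)

protein_nM = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
fraction = np.array([0.09, 0.23, 0.48, 0.74, 0.91, 0.97])

(kd_fit,), _ = curve_fit(bound_fraction, protein_nM, fraction, p0=[10.0])
print(f"fitted Kd ~ {kd_fit:.1f} nM")
```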

Relevance:

100.00%

Publisher:

Abstract:

A Monte Carlo simulation method for globular proteins, called extended-scaled-collective-variable (ESCV) Monte Carlo, is proposed. This method combines two Monte Carlo algorithms known as entropy-sampling and scaled-collective-variable algorithms. Entropy-sampling Monte Carlo is able to sample a large configurational space even in a disordered system that has a large number of potential barriers. In contrast, scaled-collective-variable Monte Carlo provides an efficient sampling for a system whose dynamics is highly cooperative. Because a globular protein is a disordered system whose dynamics is characterized by collective motions, a combination of these two algorithms could provide an optimal Monte Carlo simulation for a globular protein. As a test case, we have carried out an ESCV Monte Carlo simulation for a cell adhesive Arg-Gly-Asp-containing peptide, Lys-Arg-Cys-Arg-Gly-Asp-Cys-Met-Asp, and determined the conformational distribution at 300 K. The peptide contains a disulfide bridge between the two cysteine residues. This bond mimics the strong geometrical constraints that result from a protein's globular nature and give rise to highly cooperative dynamics. Computation results show that the ESCV Monte Carlo was not trapped at any local minimum and that the canonical distribution was correctly determined.
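The entropy-sampling component can be illustrated with a toy sketch on a made-up one-dimensional double-well landscape (not the authors' peptide system or the full ESCV algorithm): a move from energy E_old to E_new is accepted with probability min(1, exp(S(E_old) - S(E_new))), where S(E) is the running estimate of the microcanonical entropy (log density of states), and S is refined from the visited-energy histogram so that the sampling eventually crosses barriers and covers a wide energy range:

```python
import numpy as np

# Toy entropy-sampling Monte Carlo on an invented double-well landscape.
# S(E) is a running estimate of the entropy (log density of states) on an
# energy grid; it is refined between iterations from the visited-energy
# histogram. This is an illustration only, not the ESCV algorithm.
rng = np.random.default_rng(0)
energy = lambda x: (x ** 2 - 1.0) ** 2               # double-well potential
edges = np.linspace(0.0, 9.0, 46)                    # energy bins
S = np.zeros(len(edges) - 1)                         # entropy estimate

def ebin(e):
    return min(np.searchsorted(edges, e, side="right") - 1, len(S) - 1)

x = 0.0
for iteration in range(5):                           # refinement iterations
    hist = np.zeros_like(S)
    for _ in range(50_000):                          # sampling sweep
        x_new = x + rng.uniform(-0.4, 0.4)
        # Entropy-sampling acceptance: min(1, exp(S_old - S_new)).
        if np.log(rng.random()) < S[ebin(energy(x))] - S[ebin(energy(x_new))]:
            x = x_new
        hist[ebin(energy(x))] += 1
    S += np.log(np.maximum(hist, 1.0))               # refine entropy estimate
print("energy bins visited:", np.count_nonzero(S))
```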

Relevance:

100.00%

Publisher:

Abstract:

Subsidiaries of multinational corporations face pressures from both their internal and external environments: internally they compete for headquarters' resources and recognition, while externally they must contend with competitors and other market forces, so that in order to develop, a subsidiary must exploit market opportunities and demonstrate its entrepreneurial potential. These opportunities may lie in a new or substantially improved product or process developed with the help of a partner in the network in which the subsidiary is embedded. In particular, this research analyses innovations developed locally (within the host country) by the subsidiaries operating there; such innovations can be transferred to headquarters and then used by the other subsidiaries around the world, becoming global innovations. The main focus of this study is to understand the influence of entrepreneurship and of business networks on the development and transfer of these innovations. To this end, the study analyses a sample of 172 foreign subsidiaries operating in Brazil, modelled with structural equation modelling in order to test the hypotheses, measure the mediating effect, and perform a multigroup comparison assessing the moderating effect of subsidiary size. The results suggest that subsidiary entrepreneurship has a significant influence on the development of partnerships and the consequent embeddedness of the subsidiary in the emerging-market business network; this embeddedness is a determining factor in the development of innovations in the subsidiary, which can be transferred to headquarters and then become global innovations. Based on these results, the research contributes to a better understanding of the drivers of innovation in subsidiaries and deepens the discussion on the development of global innovations, particularly those originating in emerging markets.