928 results for Modeling methods


Relevance: 60.00%

Abstract:

Anisotropic conductive films (ACFs), which consist of an adhesive epoxy matrix and randomly distributed conductive particles, are widely used as the connection material for electronic devices with high I/O counts. However, the reliability of ACFs is still a major concern for the semiconductor industry due to a lack of experimental reliability data. This paper reports investigations into the moisture-induced failures in flip-chip-on-flex interconnections with ACFs. Both experimental and modeling methods were applied. In the experiments, the contact resistance was used as a quality indicator and was measured continuously during the accelerated tests (autoclave tests). The temperature, relative humidity and pressure were set at 121°C, 100% RH and 2 atm, respectively. The contact resistance of the ACF joints increased during the tests and nearly 25% of the joints were found to be open after 168 hours of testing. Visible conduction gaps between the adhesive and substrate pads were observed. Cracks at the adhesive/flex interface were also found. For a better understanding of the experimental results, 3-D finite element (FE) models were built and a macro-micro modeling method was used to determine the moisture diffusion and moisture-induced stresses inside the ACF joints. The modeling results are consistent with the findings of the experimental work.
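The macro-micro FE analysis referred to above rests on Fickian moisture diffusion through the adhesive. As a minimal illustration of the transport equation such models solve, the sketch below integrates 1-D Fickian diffusion across the bond line with an explicit finite-difference scheme; the diffusivity, saturated concentration and bond-line thickness are assumed placeholder values, not figures from the paper.

```python
import numpy as np

# Minimal sketch of the Fickian moisture diffusion that such FE models
# solve, reduced to 1-D: dC/dt = D * d2C/dx2 across the adhesive bond line.
# All material values below are illustrative assumptions, not the paper's.
D = 5e-13            # moisture diffusivity of the epoxy, m^2/s (assumed)
C_sat = 30.0         # saturated concentration at 121 C / 100% RH (assumed)
L = 50e-6            # bond-line thickness, m (assumed)

n = 51
dx = L / (n - 1)
dt = 0.4 * dx**2 / D                 # below the explicit-scheme stability limit
C = np.zeros(n)                      # initially dry adhesive
C[0] = C[-1] = C_sat                 # both surfaces exposed to the autoclave

t = 0.0
while C[n // 2] < 0.99 * C_sat:      # run until the mid-plane saturates
    C[1:-1] += D * dt / dx**2 * (C[2:] - 2.0 * C[1:-1] + C[:-2])
    t += dt

print(f"mid-plane reaches 99% saturation after ~{t / 3600:.2f} h")
```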

Relevance: 60.00%

Abstract:

This paper reports investigations into moisture-induced failures in flip-chip-on-flex interconnections with anisotropic conductive films (ACFs). Both experimental and modeling methods were applied. In the experiments, the contact resistance was used as a quality indicator and was measured continuously during the accelerated tests (autoclave tests). The temperature, relative humidity and pressure were set at 121°C, 100% RH and 1 atm, respectively. The contact resistance of the ACF joints increased during the tests and nearly 25% of the joints were found to be open after 168 hours of testing. Visible conduction gaps between the adhesive and substrate pads were observed. Cracks at the adhesive/flex interface were also found. It is believed that the swelling effect of the adhesive and water penetration along the adhesive/flex interface are the main causes of this contact degradation. Another finding from the experimental work was that ACF interconnections that had undergone reflow treatment were more sensitive to moisture and showed worse reliability during the tests. For a better understanding of the experimental results, 3-D finite element (FE) models were built and a macro-micro modeling method was used to determine the moisture diffusion and moisture-induced stresses inside the ACF joints. The modeling results are consistent with the findings of the experimental work.
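As a rough worked example of the swelling effect blamed above for the contact degradation, the sketch below evaluates the common hygroscopic-strain relation eps = beta * C and the stress it would produce if fully constrained; the coefficient of moisture expansion, saturated concentration and adhesive modulus are illustrative assumptions, not values from the study.

```python
# Worked sketch of the swelling mechanism: hygroscopic strain is commonly
# modeled as eps = beta * C. All numbers below are illustrative assumptions,
# not measurements from the paper.
beta = 2.0e-4        # coefficient of moisture expansion, m^3/kg (assumed)
C_sat = 30.0         # saturated moisture concentration, kg/m^3 (assumed)
E_adh = 3.0e9        # Young's modulus of the adhesive, Pa (assumed)

eps_swell = beta * C_sat     # free swelling strain at saturation
sigma = E_adh * eps_swell    # stress if the strain is fully constrained

print(f"swelling strain: {eps_swell:.2%}")                     # ~0.60%
print(f"constrained swelling stress: {sigma / 1e6:.0f} MPa")   # ~18 MPa
```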

Relevance: 60.00%

Abstract:

This paper describes an investigation of the effects of the solder reflow process on the reliability of anisotropic conductive film (ACF) interconnections for flip-chip-on-flex (FCOF) applications. Experiments as well as computer modeling methods have been used. The results show that the contact resistance of ACF interconnections increases after reflow and that the magnitude of the increase is strongly correlated with the peak reflow temperature. Nearly 40 percent of the joints are open when the peak reflow temperature is 260°C, while none open when the peak temperature is 210°C. It is believed that the coefficient of thermal expansion (CTE) mismatch between the polymer particle and the adhesive matrix is the main cause of this contact degradation. To understand this phenomenon better, a three-dimensional (3-D) finite element (FE) model of an ACF joint has been analyzed to predict the stress distribution in the conductive particles, adhesive matrix and metal pads during the reflow process. The stress level at the interface between the particle and its surrounding materials is significant, and it is highest at the interface between the particle and the adhesive matrix.
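To put a number on the CTE-mismatch mechanism described above, the sketch below evaluates the free thermal mismatch strain (delta-alpha times delta-T) between particle and matrix at the two peak reflow temperatures; both CTE values are typical figures assumed for illustration, not the paper's measured properties.

```python
# Back-of-envelope sketch of the CTE-mismatch strain the abstract identifies
# as the cause of contact degradation. CTE values are typical figures for
# polymer particles and cured epoxy, assumed for illustration only.
alpha_particle = 60e-6     # CTE of the polymer particle, 1/K (assumed)
alpha_matrix = 45e-6       # CTE of the cured adhesive matrix, 1/K (assumed)

T_bond = 25.0              # temperature after bonding, deg C
for T_peak in (210.0, 260.0):
    mismatch = abs(alpha_particle - alpha_matrix) * (T_peak - T_bond)
    print(f"peak {T_peak:.0f} C: mismatch strain = {mismatch:.4%}")
# The ~27% larger excursion at 260 C concentrates more stress at the
# particle/matrix interface, consistent with the higher open rate.
```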

Relevance: 60.00%

Abstract:

Structure-based modeling methods have been used to design a series of disubstituted triazole-linked acridine compounds with selectivity for human telomeric quadruplex DNAs. A focused library of these compounds was prepared using click chemistry, and the selectivity concept was validated against two promoter quadruplexes from the c-kit gene with known molecular structures, as well as against duplex DNA, using a FRET-based melting method. Lead compounds were found to have reduced effects on the thermal stability of the c-kit quadruplexes and duplex DNA structures. These effects were further explored with a series of competition experiments, which confirmed that binding to duplex DNA is very low even at high duplex:telomeric quadruplex ratios. Selectivity towards the c-kit quadruplexes is more complex, with some evidence of their stabilization at increasing excess over human telomeric quadruplex DNA. Selectivity is a result of the dimensions of the triazole-acridine compounds, and in particular of the separation of the two alkyl-amino terminal groups. Both lead compounds also have selective inhibitory effects on the proliferation of cancer cell lines compared with a normal cell line, and one has been shown to inhibit the activity of the telomerase enzyme, which is selectively expressed in tumor cells, where it plays a role in maintaining telomere integrity and cellular immortalization.

Relevance: 60.00%

Abstract:

Models are an important part of many policy development processes, but meeting policy objectives relies on policy analysts engaging effectively with the modeling process and modelers understanding the policy issues. Furthermore, there are many different modeling methods, each with characteristics that potentially make it more or less suitable for analyzing a particular policy issue.
This paper presents a novel framework to assist policy analysts to engage with modelers so as to make the best use of models. The framework has three dimensions: Functionality, Accuracy and Feasibility. Functionality concerns ways in which modeling can be used to support broader policy objectives, such as promoting negotiation or comparing options. Accuracy concerns how to best represent the fundamental features of the system being modeled, and relies on selecting an appropriate technique. Feasibility concerns practical issues such as access to data and modeling skills.

Relevance: 60.00%

Abstract:

This study concerns research work carried out for a Master's thesis in the second cycle of studies of the Geotechnical and Geoenvironmental Engineering course, on the contribution of X-ray fluorescence (XRF) to the zoning of georesources. Particular emphasis is placed on the use of the portable instrument and of state-of-the-art technological tools, which are indispensable to the prospecting and exploitation of mineral resources, namely in the interpretation and integration of geological data and in the modeling of methods for the exploitation and processing/treatment of mineral deposits, as well as their control. The dissertation discusses the fundamental aspects of the portable X-ray fluorescence technique (pXRF), regarding its applicability and the required methodology, with a view to defining zones of the georesource with analogous chemical characteristics that meet the requirements specified for the use of the raw material in the consuming industries. A campaign was carried out to collect limestone samples from the Sangardão quarry in Condeixa-a-Nova; its first phase had as main objectives the identification of the chemical composition of the study area and the degree of precision of the portable XRF instrument. In addition, particle-size analyses by sieving and by X-ray sedimentation were performed on samples from the settling basins and on material passed through the filter press. Once the pXRF analysis method had been validated, a second phase of the work was carried out, consisting of a much denser sampling of points analyzed by pXRF, in order to obtain greater chemical coverage of the study area and to locate the key sites for exploiting the raw material. For a correct reading of the analyzed data, tools based on new technologies were used, which proved an important contribution to good management of the georesource under evaluation, namely "XLSTAT" for statistical treatment of the data and "Surfer" for modeling.
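As an illustration of the gridding step that Surfer performs in this workflow, the hedged sketch below interpolates scattered point assays onto a regular grid with SciPy; the coordinates and CaO values are synthetic, and scipy.interpolate.griddata merely stands in for Surfer's gridding engine.

```python
import numpy as np
from scipy.interpolate import griddata

# Illustrative sketch of the gridding done with "Surfer": interpolating
# point pXRF assays onto a regular grid to map chemical zones of the
# quarry. Sample locations and CaO values below are synthetic.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 40), rng.uniform(0, 100, 40)   # locations, m
cao = 50 + 3 * np.sin(x / 15) + rng.normal(0, 0.5, 40)    # CaO wt% assays

gx, gy = np.mgrid[0:100:50j, 0:100:50j]                   # 50 x 50 grid
grid = griddata((x, y), cao, (gx, gy), method="linear")   # NaN outside hull

print(f"mean CaO: {np.nanmean(grid):.2f} wt%, "
      f"range: {np.nanmin(grid):.2f}-{np.nanmax(grid):.2f} wt%")
```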

Relevance: 60.00%

Abstract:

Dissertation for the degree of Doctor in Biochemistry from the Instituto de Tecnologia Química e Biológica of the Universidade Nova de Lisboa.

Relevance: 60.00%

Abstract:

Dissertation for the degree of Master in Electrical and Computer Engineering.

Relevance: 60.00%

Abstract:

Objective: Imipenem is a broad-spectrum antibiotic used to treat severe infections in critically ill patients. Imipenem pharmacokinetics (PK) was evaluated in a cohort of neonates treated in the Neonatal Intensive Care Unit of the Lausanne University Hospital. The objective of our study was to identify key demographic and clinical factors influencing imipenem exposure in this population. Method: PK data from neonates and infants with at least one imipenem concentration measured between 2002 and 2013 were analyzed applying population PK modeling methods. Plasma concentrations were measured upon the decision of the physician within the frame of a therapeutic drug monitoring (TDM) programme. The effects of demographic factors (sex, body weight, gestational age, postnatal age) and clinical factors (serum creatinine as a measure of kidney function; co-administration of furosemide, spironolactone, hydrochlorothiazide, vancomycin, metronidazole and erythromycin) on imipenem PK were explored. Model-based simulations were performed (with a median creatinine value of 46 μmol/l) to compare various dosing regimens with respect to their ability to maintain drug levels above predefined minimum inhibitory concentrations (MIC) for at least 40% of the dosing interval. Results: A total of 144 plasma samples were collected from 68 neonates and infants, predominantly preterm newborns, with a median gestational age of 27 weeks (24-41 weeks) and postnatal age of 21 days (2-153 days). A two-compartment model best characterized imipenem disposition. Actual body weight exhibited the greatest impact on the PK parameters, followed by gestational age, postnatal age and serum creatinine on clearance; these factors explained 19%, 9%, 14% and 9% of the interindividual variability in clearance, respectively. Model-based simulations suggested that 15 mg/kg every 12 hours maintains drug concentrations above an MIC of 2 mg/l for at least 40% of the dosing interval during the first days of life, whereas neonates older than 14 days require a dose of 20 mg/kg every 12 hours. Conclusion: Dosing strategies based on body weight and postnatal age are recommended for imipenem in all critically ill neonates and infants. Most current guidelines seem adequate for newborns, and TDM should be restricted to particular clinical situations.
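To make the %T>MIC criterion concrete, the sketch below computes the steady-state fraction of a 12-hour interval spent above an MIC of 2 mg/l under 15 mg/kg dosing, using a deliberately simplified one-compartment IV-bolus model rather than the study's two-compartment population model; the clearance and volume values are plausible neonatal figures assumed for illustration only.

```python
import numpy as np

# Simplified one-compartment IV-bolus sketch of the %T>MIC calculation
# (the study itself used a two-compartment population model). CL and V
# are plausible neonatal values assumed for illustration only.
CL = 0.15      # clearance, L/h/kg (assumed)
V = 0.5        # volume of distribution, L/kg (assumed)
dose = 15.0    # mg/kg
tau = 12.0     # dosing interval, h
mic = 2.0      # target MIC, mg/L

k = CL / V
t = np.linspace(0, tau, 1201)
# steady-state superposition: C(t) = (dose/V) * exp(-k t) / (1 - exp(-k tau))
conc = (dose / V) * np.exp(-k * t) / (1 - np.exp(-k * tau))
frac_above = (conc > mic).mean()
print(f"steady-state %T>MIC: {frac_above:.0%} of the interval")  # above 40%
```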

Relevance: 60.00%

Abstract:

First, we modeled the structure of an RNA family with a graph grammar in order to identify the sequences that belong to it. Several other modeling methods have been developed, such as stochastic context-free grammars, covariance models, secondary-structure profiles and constraint networks. These methods are based on the classical secondary structure, whereas our graph grammars are based on nucleotide cyclic motifs. To exemplify our model, we used the loop E of the ribosome, which contains the sarcin-ricin motif, extensively studied since its discovery by X-ray crystallography in the early 1990s. We built a graph grammar for the structure of the sarcin-ricin motif and derived all the sequences that can fold into it. The biological relevance of these sequences was confirmed by comparison with an alignment of more than 800 bacterial ribosomal sequences. This comparison raised alternative alignments for a few of the sequences, which we supported with secondary- and tertiary-structure predictions. Nucleotide cyclic motifs have been observed by members of our laboratory in RNAs whose tertiary structures were solved experimentally. A study of the sequences and tertiary structures of each cycle composing the sarcin-ricin structure revealed that the sequence space depends strongly on the interactions among all nucleotides that are close in three-dimensional space, i.e. not only between two adjacent base pairs. The number of sequences generated by the graph grammar is smaller than the numbers produced by methods based on the classical secondary structure. This suggests the importance of context in the sequence-structure relationship, hence the use of a contextual graph grammar, which is more expressive than context-free grammars. The graph grammars we developed take only the tertiary structure into account and neglect the interactions of specific chemical groups with extra-molecular elements, such as other macromolecules or ligands. Second, to account for these interactions, we developed a model that considers the position of chemical groups at the surface of tertiary structures. The hypothesis is that chemical groups at conserved positions in predetermined active sequences, which are displaced in sequences inactive for a given function, are more likely to be involved in interactions with factors. Continuing with the example of loop E, we searched for the groups of this loop that could be involved in interactions with elongation factors. Once the groups are identified, three-dimensional modeling can predict the sequences that correctly position these groups in their tertiary structures. A few models exist to address this problem, such as molecular descriptors, nucleotide adjacency matrices and thermodynamics-based models. However, all of them use an overly simplified representation of RNA structure, which limits their applicability.
We applied our model to the tertiary structures of a set of variants of one instance of the sarcin-ricin motif from a bacterial ribosome. Wool's team at the University of Chicago had already studied this instance experimentally by testing the viability of 12 variants, determining 4 viable and 8 lethal ones. We used this set of 12 sequences to train our model and determined a set of properties essential to their biological function. For each variant of the training set we built tertiary-structure models. We then measured the partial charges of the atoms exposed at the surface and encoded this information in vectors. We used principal component analysis to transform the vectors into a set of uncorrelated variables, called principal components. Using a weighted Euclidean distance and the nearest-neighbor algorithm, we applied leave-one-out cross-validation to choose the best parameters for predicting the activity of a new sequence by matching it to these principal components. Finally, we confirmed the predictive power of the model with a new set of 8 variants whose viability was verified experimentally in our laboratory. In conclusion, graph grammars make it possible to model the relationship between the sequence and the structure of an RNA structural element, such as the ribosomal loop E containing the sarcin-ricin motif. Applications range from correcting and assisting sequence alignment to designing sequences with a predetermined structure. We also developed a model that accounts for the specific interactions linked to a given biological function, namely with surrounding factors. Our model is based on the conservation of the exposure of the chemical groups involved in these interactions. It allowed us to predict the biological activity of a set of variants of the ribosomal loop E that binds elongation factors.
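The prediction scheme in the second part (PCA on surface partial-charge vectors, a weighted Euclidean nearest-neighbor rule, and leave-one-out cross-validation) can be sketched compactly. In the hedged example below the 12-variant feature matrix is synthetic and the per-component weighting is an assumed choice, so only the pipeline, not the laboratory's data or tuning, is reproduced.

```python
import numpy as np

# Sketch of the classification scheme described above: PCA on surface
# partial-charge vectors, then a nearest-neighbor call under a weighted
# Euclidean distance, validated by leave-one-out cross-validation.
rng = np.random.default_rng(1)
X = rng.normal(size=(12, 30))        # synthetic partial-charge vectors
y = np.array([1] * 4 + [0] * 8)      # 4 viable, 8 lethal variants

# PCA via SVD on centered data
Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                    # keep 3 principal components
w = s[:3] ** 2                       # per-component weights (assumed choice)

def predict(i):
    """1-NN under weighted Euclidean distance, leaving variant i out."""
    d = np.sqrt((w * (Z - Z[i]) ** 2).sum(axis=1))
    d[i] = np.inf                    # exclude the held-out variant
    return y[np.argmin(d)]

acc = np.mean([predict(i) == y[i] for i in range(len(y))])
print(f"LOOCV accuracy on the synthetic set: {acc:.0%}")
```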

Relevance: 60.00%

Abstract:

This thesis presents modeling methods for the real-time simulation of important pollutant components in the exhaust stream of combustion engines. A holistic development workflow is described, and its individual steps, from the design of experiments through the construction of a suitable model structure to model validation, are explained in detail. These methods are applied to reproduce the dynamic emission traces of the relevant pollutants of a gasoline engine. Together with a full engine simulation, the derived emission models are used to optimize operating strategies in hybrid vehicles. The first part of the thesis presents a systematic procedure for planning and constructing complex, dynamic, real-time-capable model structures. It begins with a physically motivated structuring that divides a process model into individual, manageable elements. These submodels are then extended step by step, each starting from the simplest possible nominal model core, and ultimately allow a robust reproduction of even complex dynamic behavior with sufficient accuracy. Since some submodels are realized as neural networks, a dedicated method for so-called discrete evident interpolation (DEI) was developed; used during training, it can ensure plausible, i.e. evident, behavior of experimental models with a minimal number of measurements. To calibrate the individual submodels, statistical experimental designs were created, generated both with classical DoE methods and by means of an iterative design of experiments (iDoE). In the second part of the thesis, after the most important influencing parameters have been identified, the model structures for reproducing the dynamic emission traces of selected exhaust components are presented, namely unburned hydrocarbons (HC), nitrogen monoxide (NO) and carbon monoxide (CO). The simulation models reproduce in real time the pollutant concentrations of a combustion engine during cold start and the subsequent warm-up phase. In contrast to the obligatory reproduction of stationary behavior, the dynamic behavior of the engine in transient operating phases is also represented with sufficient accuracy. A consistent application of the methodology presented in the first part of the thesis allows high simulation quality and robustness despite the large number of process influencing variables. The pollutant emission models, embedded in the dynamic overall model of a combustion engine, are used to derive an optimal operating strategy for a hybrid vehicle. Model-based methods are particularly well suited to such optimization tasks; in particular, the use of dynamic and cold-start-capable models, and the resulting closeness to reality, yields high output quality.
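As a minimal illustration of a dynamic emission submodel of the kind described, the sketch below trains a small neural network on lagged engine inputs (a NARX-style structure) to reproduce a synthetic NO trace; the data, lag depth and network size are assumptions, and the thesis's DEI training procedure and iDoE designs are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative NARX-style emission submodel: a small neural network
# predicts the NO concentration from lagged engine inputs. The data and
# the model structure below are synthetic assumptions for illustration.
rng = np.random.default_rng(2)
n = 2000
speed = 1500 + 500 * np.sin(np.linspace(0, 20, n))          # rpm
load = 0.5 + 0.3 * np.sin(np.linspace(0, 13, n))            # normalized load
no = 0.8 * load + 2e-4 * speed + rng.normal(0, 0.02, n)     # synthetic NO

lags = 3
X = np.column_stack([np.roll(speed, k) for k in range(lags)] +
                    [np.roll(load, k) for k in range(lags)])[lags:]
t = no[lags:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:1500], t[:1500])                               # train on first 75%
print(f"held-out R^2: {model.score(X[1500:], t[1500:]):.3f}")
```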

Relevance: 60.00%

Abstract:

In today's hyperconnected, dynamic world laden with uncertainty, conventional analytical methods and models are showing their limitations. Organizations therefore require useful tools that employ information technology and computational simulation models as mechanisms for decision-making and problem-solving. One of the most recent, powerful and promising is agent-based modeling and simulation (ABMS). Many organizations, including consulting firms, use this technique to understand phenomena, evaluate strategies and solve problems of various kinds. Nevertheless, there is, to our knowledge, no situational review of ABMS and its application to organizational research. It should also be noted that, owing to its novelty, the topic has not been sufficiently disseminated and developed in Latin America. Consequently, this project aims to produce a situational review of ABMS and its impact on organizational research.
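For readers unfamiliar with the technique under review, the toy sketch below shows what an agent-based simulation looks like in a few lines: agents on a random contact network adopt a practice once enough of their contacts have; every parameter is an arbitrary illustration, not a model from the surveyed literature.

```python
import random

# Toy agent-based simulation: threshold-based adoption of a practice on a
# random contact network. All parameters are arbitrary illustrations.
random.seed(3)
N, K, THRESHOLD = 200, 6, 0.3
neighbors = [random.sample(range(N), K) for _ in range(N)]
adopted = [random.random() < 0.05 for _ in range(N)]   # 5% early adopters

for step in range(30):
    current = adopted[:]                               # synchronous update
    for i in range(N):
        share = sum(current[j] for j in neighbors[i]) / K
        if share >= THRESHOLD:
            adopted[i] = True
    if (step + 1) % 10 == 0:
        print(f"step {step + 1:2d}: {sum(adopted) / N:.0%} adopted")
```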

Relevance: 60.00%

Abstract:

Evolutionary computation, and genetic algorithms in particular, are increasingly used by organizations to solve their management and decision-making problems (Apoteker & Barthelemy, 2000). The literature on the subject is growing and several state-of-the-art reviews have been published. Despite this, no explicit work systematically evaluates the use of genetic algorithms in specific international business problems (examples include international logistics, international trade, international marketing, international finance and international strategy). The purpose of this thesis is therefore to produce a situational review of the applications of genetic algorithms in international business.
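As a minimal illustration of the technique being surveyed, the sketch below runs a small genetic algorithm on a toy market-selection problem with a budget penalty; the fitness function, operators and all numbers are arbitrary illustrations, not drawn from any of the reviewed applications.

```python
import random

# Minimal genetic algorithm: evolve a 0/1 selection of markets to maximize
# a toy profit function under a budget penalty. Numbers are arbitrary.
random.seed(4)
profit = [random.uniform(1, 10) for _ in range(20)]
cost = [random.uniform(1, 5) for _ in range(20)]
BUDGET = 30.0

def fitness(ind):
    c = sum(ci for ci, bit in zip(cost, ind) if bit)
    p = sum(pi for pi, bit in zip(profit, ind) if bit)
    return p - 100.0 * max(0.0, c - BUDGET)        # penalize overspending

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:25]                             # truncation selection
    children = []
    while len(children) < 25:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 20)
        child = a[:cut] + b[cut:]                  # one-point crossover
        i = random.randrange(20)
        child[i] = 1 - child[i]                    # point mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(f"best fitness: {fitness(best):.2f}, markets selected: {sum(best)}")
```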

Relevance: 60.00%

Abstract:

Background: This paper investigates the question of a suitable basic model for the number of scrapie cases in a holding, and applies this knowledge to the estimation of the size of the scrapie-affected holding population and to the adequacy of control measures within holdings. Is the number of scrapie cases proportional to the size of the holding, in which case it should be incorporated into the parameter of the error distribution for the scrapie counts? Or is there a different, potentially more complex, relationship between case count and holding size, in which case the information about the size of the holding would be better incorporated as a covariate in the modeling? Methods: We show that this question can be appropriately addressed via a simple zero-truncated Poisson model, in which the hypothesis of proportionality enters as a special offset model. Model comparisons can be achieved by means of likelihood ratio testing. The procedure is illustrated by means of surveillance data on classical scrapie in Great Britain. Furthermore, the model with the best fit is used to estimate the size of the scrapie-affected holding population in Great Britain by means of two capture-recapture estimators: the Poisson estimator and the generalized Zelterman estimator. Results: No evidence could be found for the hypothesis of proportionality. In fact, there is some evidence that this relationship follows a curved line, which increases for small holdings up to a maximum and then declines again. Furthermore, it is pointed out how crucial the correct model choice is when applied to capture-recapture estimation on the basis of zero-truncated Poisson models as well as on the basis of the generalized Zelterman estimator: estimators based on the proportionality model return very different and unreasonable estimates of the population sizes. Conclusion: Our results stress the importance of an adequate modeling approach to the association between holding size and the number of cases of classical scrapie within a holding. Reporting artefacts and speculative biological effects are hypothesized as the underlying causes of the observed curved relationship. The lack of adjustment for these artefacts might well render the current strategies for the control of the disease ineffective.
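The two capture-recapture steps named above can be sketched in a few lines. The example below fits the zero-truncated Poisson MLE and the simple (non-generalized, covariate-free) Zelterman estimator to synthetic frequency counts of cases per holding, since the actual GB surveillance data are not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

# Sketch of the two capture-recapture estimators named in the abstract,
# in their simplest (no-covariate) forms. The frequency counts of cases
# per holding below are synthetic, not the GB surveillance data.
freq = {1: 120, 2: 30, 3: 8, 4: 2}           # f_k: holdings with k cases
n = sum(freq.values())                        # observed holdings
mean = sum(k * f for k, f in freq.items()) / n

# (a) zero-truncated Poisson MLE: solve lam / (1 - exp(-lam)) = sample mean
lam = brentq(lambda l: l / (1 - np.exp(-l)) - mean, 1e-9, 10)
N_ztp = n / (1 - np.exp(-lam))                # Horvitz-Thompson-type total

# (b) Zelterman estimator: lam_Z = 2 * f2 / f1, robust to heterogeneity
lam_z = 2 * freq[2] / freq[1]
N_zelt = n / (1 - np.exp(-lam_z))

print(f"ZTP:       lambda = {lam:.3f}, N_hat = {N_ztp:.0f}")
print(f"Zelterman: lambda = {lam_z:.3f}, N_hat = {N_zelt:.0f}")
```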

Relevance: 60.00%

Abstract:

A greedy technique is proposed to construct parsimonious kernel classifiers using the orthogonal forward selection method and boosting, based on the Fisher ratio for class separability measure. Unlike most kernel classification methods, which restrict kernel means to the training input data and use a fixed common variance for all the kernel terms, the proposed technique can tune both the mean vector and the diagonal covariance matrix of each individual kernel by incrementally maximizing the Fisher ratio for class separability. An efficient weighted optimization method based on boosting is developed to append kernels one by one in an orthogonal forward selection procedure. Experimental results obtained using this construction technique demonstrate that it offers a viable alternative to existing state-of-the-art kernel modeling methods for constructing sparse Gaussian radial basis function network classifiers that generalize well.
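A minimal sketch of the Fisher-ratio criterion that drives the greedy selection is given below: each candidate Gaussian kernel column is scored by (m1 − m2)²/(s1² + s2²) over its responses for the two classes, and the best column becomes the first selected center. The data and the common kernel width are synthetic, and the paper's orthogonalization, boosting-based weight optimization and per-kernel covariance tuning are not reproduced.

```python
import numpy as np

# Sketch of the Fisher-ratio class-separability criterion used to pick
# kernel centers greedily. Data and kernel width are synthetic.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(2.5, 1, (40, 2))])
y = np.array([0] * 40 + [1] * 40)

def fisher_ratio(col, y):
    """(m1 - m2)^2 / (s1^2 + s2^2) over one kernel column's responses."""
    a, b = col[y == 0], col[y == 1]
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

sigma = 1.0                                    # common kernel width (assumed)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * sigma**2))               # candidate Gaussian columns

scores = [fisher_ratio(K[:, j], y) for j in range(len(X))]
best = int(np.argmax(scores))
print(f"first selected center: x = {X[best].round(2)}, "
      f"Fisher ratio = {scores[best]:.2f}")
```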