906 results for computational models


Relevance:

60.00%

Abstract:

The accurate determination of the non-linear shear behaviour and fracture toughness of continuous carbon-fibre/polymer composites remains a considerable challenge. These measurements are often necessary to generate material parameters for advanced computational damage models. In particular, there is a dearth of detailed shear fracture toughness characterisation for thermoplastic composites, which are generating renewed interest within the aerospace and automotive sectors. In this work, carbon-fibre (AS4)/polyetherketoneketone (PEKK) thermoplastic composite V-notched cross-ply specimens were manufactured to investigate their non-linear response under pure shear loading. Both monotonic and cyclic loading were applied to study the shear modulus degradation and progressive failure. For the first time in the reported literature, we use the essential work of fracture approach to measure the shear fracture toughness of continuous fibre-reinforced composite laminates. Excellent geometric similarity in the load-displacement curves was observed for ligament-scaled specimens. The laminate fracture toughness was determined by linear regression of the specific work of fracture values to zero ligament thickness, and verified with computational models. The matrix intralaminar fracture toughness (ply-level fracture toughness) associated with shear loading was determined by the area method. This paper also details the numerical implementation of a new three-dimensional phenomenological model for carbon-fibre thermoplastic composites using the measured values, which is able to accurately represent the full non-linear mechanical response and fracture process. The constitutive model includes a new non-linear shear profile, shear modulus degradation and load reversal. It is combined with a smeared crack model for representing ply-level damage initiation and propagation. The model is shown to accurately predict the constitutive response in terms of permanent plastic strain, degraded modulus and load reversal. Predictions also compare favourably with the observed evolution of damage leading to final fracture.
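The essential-work-of-fracture extrapolation described above reduces to a straight-line fit. Below is a minimal sketch of that step, assuming hypothetical specimen data (the paper's measured values are not reproduced here):

```python
# Minimal sketch of the essential-work-of-fracture (EWF) extrapolation.
# Ligament sizes and work-of-fracture values are illustrative only.
import numpy as np

l = np.array([5.0, 10.0, 15.0, 20.0])     # ligament size, mm (hypothetical)
t = 2.0                                   # specimen thickness, mm
W_f = np.array([0.45, 1.30, 2.55, 4.20])  # measured work of fracture, J

# Specific work of fracture: w_f = W_f / (l * t), here in J/mm^2.
w_f = W_f / (l * t)

# EWF assumes w_f = w_e + beta * w_p * l: a straight line in l whose
# intercept at zero ligament is the essential (toughness-like) term w_e.
slope, intercept = np.polyfit(l, w_f, 1)
print(f"essential work of fracture w_e ~ {intercept * 1e6:.1f} J/m^2")
```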

Relevance:

60.00%

Abstract:

This work concerns the study, development and application, in the field of biomechanics, of intrinsic sensors based on fibre Bragg gratings (FBGs). The applications are demonstrated on in vitro biomechanical models such as hip implants, knee prostheses, osteosynthesis plates and dental implants. Optimising the development of prostheses and their fixation elements currently depends on generating and experimentally validating their computational models. These models are normally validated using data from non-invasive and invasive tests on synthetic models. In in vitro tests, conventional sensors operate on an electrical principle and are sometimes inappropriately sized. In situations explored in this work, such as sensing on irregular surfaces and joints, or the analysis of internal strains, FBG sensors are recommended: their small size and flexibility allow localised measurements. The development of a protocol for using FBGs, and its application in this context, proved more suitable given the precision and future safety offered. An experimental methodology was developed for strain measurements with FBGs along a metallic osteosynthesis plate screwed to a fractured synthetic femur. The curing of the bone cement used to fix the tibial tray in total knee arthroplasty was monitored by measuring its shrinkage and temperature. A cooling system driven by the temperature readings was also developed to prevent bone necrosis. Strains in this cement after curing, resulting from static mechanical loads, were studied. The curing of bone cement applied to hip prostheses, and the strains in these prostheses, were also studied. In addition, a comparative study of several dental implants was carried out by measuring the strain distributions produced in response to impulsive mechanical excitations. Commercial systems were initially used to interrogate the FBGs. However, some applications could not be implemented with these commercial systems because of the low reflectivity of the FBGs used, but fundamentally because the tests required an acquisition rate (around 15 kHz) far above the 5 Hz available. For these reasons a complete optoelectronic FBG interrogation system, based on a tunable filter, was developed; its main feature is a high acquisition rate (up to 1.2 MHz), and it is also notable for easily reconfigurable readout parameters, a friendly user interface, and the ability to operate with up to 5 FBGs on the same optical fibre.
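For context, the strain measurements described above rest on the standard FBG relation between Bragg-wavelength shift, strain and temperature. The sketch below uses typical textbook coefficients for silica fibre, not values from this thesis:

```python
# Minimal sketch of converting an FBG Bragg-wavelength shift to strain,
# using the standard relation dL/L = (1 - p_e)*eps + k_T*dT.
# P_E and K_T are typical textbook figures for silica fibre (assumptions).

P_E = 0.22     # effective photo-elastic coefficient of silica (typical)
K_T = 6.7e-6   # combined thermo-optic + expansion coefficient, 1/K (typical)

def strain_from_shift(lambda_bragg_nm: float,
                      delta_lambda_nm: float,
                      delta_T_K: float = 0.0) -> float:
    """Return mechanical strain (dimensionless) from a Bragg shift,
    after removing the temperature contribution."""
    relative_shift = delta_lambda_nm / lambda_bragg_nm
    return (relative_shift - K_T * delta_T_K) / (1.0 - P_E)

# Example: a 1550 nm grating shifting by 0.012 nm at constant temperature
# corresponds to roughly 10 microstrain.
print(f"{strain_from_shift(1550.0, 0.012) * 1e6:.1f} microstrain")
```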

Relevance:

60.00%

Abstract:

Painterly rendering has been linked to computer vision, but we propose to link it to human vision, because perception and painting are interwoven processes. Recent progress in developing computational models makes it possible to establish this link. We show that completely automatic rendering can be obtained by applying four image representations in the visual system: (1) colour constancy can be used to correct colours, (2) coarse background brightness, in combination with colour coding in cytochrome-oxidase blobs, can be used to create a background with a big brush, (3) the multi-scale line and edge representation provides a very natural way to render finer brush strokes, and (4) the multi-scale keypoint representation serves to create saliency maps for Focus-of-Attention (FoA), which can be used to render important structures. Basic processes are described, renderings are shown, and important ideas for future research are discussed.
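As an illustration of step (1), the sketch below implements the classic grey-world algorithm, a common simple stand-in for colour constancy; the paper itself builds on a cortical colour-constancy model, so this is not the authors' method:

```python
# Grey-world colour constancy: assume the average scene colour is grey,
# and scale each channel to remove a global colour cast.
import numpy as np

def grey_world(image: np.ndarray) -> np.ndarray:
    """image: (H, W, 3) float array in [0, 1]. Scales each RGB channel
    so its mean matches the global mean."""
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(image * gains, 0.0, 1.0)

# Usage: corrected = grey_world(img) before applying the brush-stroke steps.
```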

Relevance:

60.00%

Abstract:

Doctoral thesis, Electronic and Computer Engineering, Faculdade de Ciência e Tecnologia, Universidade do Algarve, 2007

Relevance:

60.00%

Abstract:

A prominent hypothesis states that specialized neural modules within the human lateral frontopolar cortices (LFPCs) support “relational integration” (RI), the solving of complex problems using inter-related rules. However, it has been proposed that LFPC activity during RI could reflect the recruitment of additional “domain-general” resources when processing more difficult problems in general as opposed to RI specifically. Moreover, theoretical research with computational models has demonstrated that RI may be supported by dynamic processes that occur throughout distributed networks of brain regions as opposed to within a discrete computational module. Here, we present fMRI findings from a novel deductive reasoning paradigm that controls for general difficulty while manipulating RI demands. In accordance with the domain-general perspective, we observe an increase in frontoparietal activation during challenging problems in general as opposed to RI specifically. Nonetheless, when examining frontoparietal activity using analyses of phase synchrony and psychophysiological interactions, we observe increased network connectivity during RI alone. Moreover, dynamic causal modeling with Bayesian model selection identifies the LFPC as the effective connectivity source. Based on these results, we propose that during RI an increase in network connectivity and a decrease in network metastability allows rules that are coded throughout working memory systems to be dynamically bound. This change in connectivity state is top-down propagated via a hierarchical system of domain-general networks with the LFPC at the apex. In this manner, the functional network perspective reconciles key propositions of the globalist, modular, and computational accounts of RI within a single unified framework.
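As one concrete example of the phase-synchrony analyses mentioned, the sketch below computes a generic phase-locking value (PLV) between two signals; it illustrates the measure itself, not the study's actual pipeline:

```python
# Phase-locking value: magnitude of the mean phase-difference vector.
# 1 = perfectly phase-locked, 0 = no consistent phase relationship.
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x: np.ndarray, y: np.ndarray) -> float:
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.exp(1j * phase_diff).mean()))

# Example on synthetic signals sharing a common slow oscillation.
t = np.linspace(0, 100, 2000)
common = np.sin(2 * np.pi * 0.1 * t)
x = common + 0.3 * np.random.randn(t.size)
y = common + 0.3 * np.random.randn(t.size)
print(f"PLV = {phase_locking_value(x, y):.2f}")
```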

Relevance:

60.00%

Abstract:

Master's final project submitted for the degree of Master in Mechanical Engineering

Relevance:

60.00%

Abstract:

It is common knowledge that nutrients are fundamental to life and that excess nutrition leads to the accumulation of fat, which can reach the point of obesity, with numerous health complications as a consequence. Different neuroanatomical structures of the brain play an essential role in feeding behaviour. The action of several signalling molecules (hormones, neurotransmitters and neuropeptides) regulates food intake; some hormones, for example, can increase the sensation of satiety, reducing appetite and caloric intake. The discovery of hormones involved in energy balance opened new opportunities to develop treatments for obesity. Delivering anti-obesity medication topically or transdermally is a challenge, because the skin acts as a natural protective barrier. Being a barrier with a large surface area and easy accessibility, however, the skin is of potential interest for the delivery of therapy-specific drugs. Several methods have been studied to increase the permeability of therapeutic molecules into and through the skin. Transdermal biotechnologies are a field of growing interest, given the dermal and pharmaceutical applications that underlie them. Several computational and mathematical models in use allow a more comprehensive view of pure experimental data and even permit practical extrapolation of new dermal diffusion methodologies. However, they comprise a complex variety of theories and assumptions that restrict their use to specific situations. This work first reviews, extensively, the various theoretical methodologies for the study of dermal diffusion and systematises their characteristics; it then carries out the preliminary phase of a new computational approach to dermal diffusion, which determines microscopic characteristics of molecules capable of inducing weight loss, such as leptin and/or its agonists.
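A minimal sketch of the kind of model the reviewed methodologies build on: Fick's second law, dc/dt = D d²c/dx², solved by explicit finite differences for a drug diffusing across a skin layer. All values are illustrative, not parameters from the thesis:

```python
# Explicit finite-difference solution of 1D diffusion across a membrane.
import numpy as np

D = 1e-10             # diffusion coefficient, m^2/s (illustrative)
L = 100e-6            # skin-layer thickness, m (illustrative)
nx = 50
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D  # explicit-scheme stability requires dt <= dx^2 / (2D)

c = np.zeros(nx)
c[0] = 1.0            # donor side at unit concentration; sink at x = L

for _ in range(20000):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, 0.0

# Near steady state the profile is linear and the flux approaches D/L.
print(f"flux estimate: {D * (c[0] - c[1]) / dx:.3e}")
```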

Relevance:

60.00%

Abstract:

Hardware/software systems are becoming indispensable in all aspects of everyday life. The growing presence of these systems in products and services motivates the search for methods to develop them efficiently. Efficient design of such systems is, however, limited by several factors, among them: the growing complexity of applications, increasing integration density, the heterogeneous nature of products and services, and shrinking time-to-market. Transaction-level modelling (TLM) is considered a promising paradigm for managing design complexity, providing means to explore and validate design alternatives at high levels of abstraction. This research proposes a methodology for expressing time in TLM based on an analysis of timing constraints. We propose to use a combination of two development paradigms to accelerate design: TLM on the one hand, and a methodology for expressing time between different transactions on the other. This synergy allows us to combine, in a single environment, high-performance simulation methods and formal analytical methods. We propose a new timing-verification algorithm based on a procedure for linearising min/max constraints, together with an optimisation technique that improves the algorithm's efficiency. We complete the mathematical description of all the constraint types presented in the literature. We develop methods for exploring and refining the communication system that allow the timing-verification algorithms to be used at different TLM levels. Since several definitions of TLM exist, within this research we define a specification and simulation methodology for hardware/software systems based on the TLM paradigm, in which several modelling concepts can be considered separately. Based on modern software-engineering technologies such as XML, XSLT, XSD, object-oriented programming and several others provided by the .Net environment, the proposed methodology presents an approach that makes it possible to reuse intermediate models in order to cope with the time-to-market constraint. It provides a general approach to system modelling that separates design aspects such as the models of computation used to describe the system at multiple abstraction levels. As a result, the system model clearly identifies the system's functionality without the details tied to development platforms, which improves the "portability" of the application model.
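To make the timing-verification idea concrete, the sketch below checks the linear special case: a system of difference constraints is consistent iff its constraint graph has no negative cycle, detected here with Bellman-Ford. The thesis's min/max linearisation reduces richer constraint types toward systems of this kind; this example is generic, not the thesis algorithm:

```python
# Consistency check for difference constraints t_v - t_u <= w.
def consistent(n_events: int, constraints: list[tuple[int, int, float]]) -> bool:
    """constraints: (u, v, w) meaning t_v - t_u <= w. Returns True iff
    some assignment of event times satisfies them all."""
    dist = [0.0] * n_events            # a virtual source reaches every event
    for _ in range(n_events):
        changed = False
        for u, v, w in constraints:
            if dist[u] + w < dist[v]:  # relax edge u -> v of weight w
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return True                # converged: constraints consistent
    return False                       # still relaxing: negative cycle

# Example: t1 - t0 <= 5, t2 - t1 <= 3, t0 - t2 <= -9 is infeasible
# (it forces t2 - t0 >= 9 while the first two give t2 - t0 <= 8).
print(consistent(3, [(0, 1, 5), (1, 2, 3), (2, 0, -9)]))  # False
```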

Relevance:

60.00%

Abstract:

This thesis summarizes the results of studies on a syntax-based approach to translation between Malayalam, one of the Dravidian languages, and English, and on the development of the major modules in building a prototype machine translation system from Malayalam to English. The development of the system is a pioneering effort for the Malayalam language, unattempted by previous researchers, and the computational models chosen for the system are the first of their kind for Malayalam. An in-depth study has been carried out on the design of the computational models and data structures needed for the different modules required for the prototype system: a morphological analyzer, a parser, a syntactic structure transfer module and a target-language sentence generator. Lists of part-of-speech tags and chunk tags, and the hierarchical dependencies among chunks required for the translation process, have also been generated. In the development process the major goals are (a) accuracy of translation, (b) speed and (c) space. Accuracy-wise, smart tools for handling transfer grammar and translation standards, including equivalent words, expressions, phrases and styles in the target language, are to be developed. The grammar should be optimized with a view to obtaining a single correct parse and hence a single translated output. Speed-wise, innovative use of corpus analysis, an efficient parsing algorithm, the design of efficient data structures, and run-time frequency-based rearrangement of the grammar, which substantially reduces parsing and generation time, are required. The space requirement also has to be minimised.
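An illustrative sketch of the run-time frequency-based grammar rearrangement mentioned above: rules that fire often migrate to the front of their rule list, so the parser tries likely expansions first. The structure is hypothetical, not the thesis's actual implementation:

```python
# Grammar whose expansion lists reorder themselves by usage frequency.
from collections import defaultdict

class Grammar:
    def __init__(self, rules: dict[str, list[tuple[str, ...]]]):
        self.rules = rules             # nonterminal -> list of expansions
        self.hits = defaultdict(int)   # (lhs, expansion) -> usage count

    def expansions(self, lhs: str):
        # Serve expansions most-frequently-used first.
        return sorted(self.rules[lhs],
                      key=lambda rhs: -self.hits[(lhs, rhs)])

    def record_use(self, lhs: str, rhs: tuple[str, ...]):
        self.hits[(lhs, rhs)] += 1     # called when a rule yields a parse

g = Grammar({"NP": [("Det", "N"), ("PropN",), ("NP", "PP")]})
g.record_use("NP", ("PropN",))
g.record_use("NP", ("PropN",))
print(g.expansions("NP")[0])           # ('PropN',) is now tried first
```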

Relevance:

60.00%

Abstract:

Computational models are arising in which programs are constructed by specifying large networks of very simple computational devices. Although such models can potentially make use of a massive amount of concurrency, their usefulness as a programming model for the design of complex systems will ultimately be decided by the ease with which such networks can be programmed (constructed). This thesis outlines a language for specifying computational networks. The language (AFL-1) consists of a set of primitives and a mechanism to group these elements into higher-level structures. An implementation of this language runs on the Thinking Machines Corporation Connection Machine. Two significant examples were programmed in the language: an expert system (CIS) and a planning system (AFPLAN). These systems are explained and analyzed in terms of how they compare with similar systems written in conventional languages.
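A hypothetical analogue (in Python, not AFL-1 syntax, which the abstract does not give) of the two ideas the language combines: primitive devices, and a grouping mechanism that wires many simple devices into a higher-level structure:

```python
# Toy network of very simple computational devices with a grouping primitive.
class Node:
    """A very simple computational device: sums its inputs, thresholds."""
    def __init__(self, threshold: float = 1.0):
        self.threshold, self.inputs = threshold, []

    def output(self) -> int:
        return int(sum(n.output() for n in self.inputs) >= self.threshold)

class Source(Node):
    def __init__(self, value: int):
        super().__init__()
        self.value = value

    def output(self) -> int:
        return self.value

def group_and(nodes: list[Node]) -> Node:
    """Grouping mechanism: compose existing nodes into an AND structure."""
    g = Node(threshold=len(nodes))
    g.inputs = list(nodes)
    return g

net = group_and([Source(1), Source(1), Source(0)])
print(net.output())   # 0: not all inputs are active
```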

Relevance:

60.00%

Abstract:

During fatigue tests of cortical bone specimens, non-zero strains occur at the unloaded portion of the cycle (zero stress) and progressively accumulate as the test progresses. This non-zero strain is hypothesised to be mostly, if not entirely, describable as creep. This work examines the rate of accumulation of this strain and quantifies its stress dependency. A published relationship determined from creep tests of cortical bone (Journal of Biomechanics 21 (1988) 623) is combined with knowledge of the stress history during fatigue testing to derive an expression for the amount of creep strain in fatigue tests. Fatigue tests on 31 bone samples from four individuals showed strong correlations between creep strain rate and both stress and “normalised stress” (σ/E) during tensile fatigue testing (0–T). Combined results were good (r² = 0.78), and differences between the various individuals, in particular, vanished when effects were examined against normalised stress values. Constants of the regression showed equivalence to constants derived in creep tests. The universality of the results, with respect to four different individuals of both sexes, shows great promise for use in computational models of fatigue in bone structures.
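A minimal sketch of the kind of stress-dependency quantification described, assuming a power-law creep relation dε/dt = A·(σ/E)^n and hypothetical data (not the paper's 31 samples):

```python
# Fit the power law d(eps)/dt = A * (sigma/E)^n by linear regression
# in log-log space. All values below are illustrative.
import numpy as np

norm_stress = np.array([4e-4, 6e-4, 8e-4, 1.2e-3])       # sigma/E
creep_rate = np.array([1.1e-7, 9.0e-7, 4.2e-6, 4.0e-5])  # 1/s

n, logA = np.polyfit(np.log(norm_stress), np.log(creep_rate), 1)
print(f"exponent n ~ {n:.1f}, coefficient A ~ {np.exp(logA):.2e} 1/s")
```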

Relevance:

60.00%

Abstract:

One of the most pervading concepts underlying computational models of information processing in the brain is linear input integration of rate-coded univariate information by neurons. After a suitable learning process this results in neuronal structures that statically represent knowledge as a vector of real-valued synaptic weights. Although this general framework has contributed to the many successes of connectionism, in this paper we argue that for all but the most basic of cognitive processes a more complex, multivariate dynamic neural coding mechanism is required: knowledge should not be spatially bound to a particular neuron or group of neurons. We conclude the paper with a discussion of a simple experiment that illustrates dynamic knowledge representation in a spiking neuron connectionist system.
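A minimal sketch of the kind of spiking unit behind the experiment noted above: a leaky integrate-and-fire neuron, in which information can live in spike timing rather than in a static weight vector. Parameters are generic textbook values, not those of the paper's system:

```python
# Leaky integrate-and-fire neuron: dv/dt = (-(v - v_rest) + I) / tau,
# with a reset to rest whenever the threshold is crossed.
import numpy as np

def lif_spike_times(current: float, t_max=0.2, dt=1e-4,
                    tau=0.02, v_rest=0.0, v_thresh=1.0) -> list[float]:
    v, spikes = v_rest, []
    for step in range(int(t_max / dt)):
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:
            spikes.append(step * dt)   # record the spike time
            v = v_rest                 # reset after the spike
    return spikes

# Stronger drive -> earlier and more frequent spikes: a temporal code.
print(len(lif_spike_times(1.5)), len(lif_spike_times(3.0)))
```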

Relevance:

60.00%

Abstract:

Terahertz (THz) frequency radiation, 0.1 THz to 20 THz, is being investigated for biomedical imaging applications following the introduction of pulsed THz sources that produce picosecond pulses and function at room temperature. Owing to the broadband nature of the radiation, spectral and temporal information is available from radiation that has interacted with a sample; this information is exploited in the development of biomedical imaging tools and sensors. In this work, models to aid interpretation of broadband THz spectra were developed and evaluated. THz radiation lies on the boundary between regions best considered using a deterministic electromagnetic approach and those better analysed using a stochastic approach incorporating quantum mechanical effects, so two computational models to simulate the propagation of THz radiation in an absorbing medium were compared. The first was a thin film analysis and the second a stochastic Monte Carlo model. The Cole–Cole model was used to predict the variation with frequency of the physical properties of the sample and scattering was neglected. The two models were compared with measurements from a highly absorbing water-based phantom. The Monte Carlo model gave a prediction closer to experiment over 0.1 to 3 THz. Knowledge of the frequency-dependent physical properties, including the scattering characteristics, of the absorbing media is necessary. The thin film model is computationally simple to implement but is restricted by the geometry of the sample it can describe. The Monte Carlo framework, despite being initially more complex, provides greater flexibility to investigate more complicated sample geometries.
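The Cole–Cole model mentioned above has a compact closed form; the sketch below evaluates it with typical literature parameters for water, used purely as an illustration (the paper's fitted values are not reproduced here):

```python
# Cole-Cole complex permittivity across the 0.1-3 THz band from the text.
import numpy as np

def cole_cole(freq_hz, eps_s=78.4, eps_inf=4.9, tau=8.3e-12, alpha=0.02):
    """eps*(w) = eps_inf + (eps_s - eps_inf) / (1 + (1j*w*tau)**(1 - alpha));
    alpha = 0 recovers the pure Debye model."""
    w = 2.0 * np.pi * freq_hz
    return eps_inf + (eps_s - eps_inf) / (1.0 + (1j * w * tau) ** (1.0 - alpha))

freqs = np.array([0.1e12, 1.0e12, 3.0e12])
eps = cole_cole(freqs)
print("eps' :", eps.real)
print("eps'':", -eps.imag)   # dielectric loss (positive by convention)
```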

Relevance:

60.00%

Abstract:

The different triplet sequences in high-molecular-weight aromatic copolyimides comprising pyromellitimide units ("I") flanked by either ether-ketone ("K") or ether-sulfone ("S") residues show different binding strengths for pyrene-based tweezer-molecules. Such molecules bind primarily to the diimide unit through complementary π–π stacking and hydrogen bonding. However, as shown by the magnitudes of the 1H NMR complexation shifts and the tweezer-polymer binding constants, the triplet "SIS" binds tweezer-molecules more strongly than "KIS", which in turn binds them more strongly than "KIK". Computational models of tweezer-polymer binding, together with single-crystal X-ray analyses of tweezer complexes with macrocyclic ether-imides, reveal that the variations in binding strength between the different triplet sequences arise from the different conformational preferences of the aromatic rings at diarylketone and diarylsulfone linkages. These preferences determine whether or not chain-folding and secondary π–π stacking occur between the arms of the tweezer-molecule and the 4,4'-biphenylene units that flank the central diimide residue.
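An illustrative sketch of how binding constants are commonly extracted from 1H NMR complexation shifts: fit a 1:1 binding isotherm to titration data. The simplified form and the values below are hypothetical, not the paper's measurements:

```python
# Fit d_obs = d_max * K*[G] / (1 + K*[G]), the 1:1 isotherm in the
# simplifying limit where the guest is in large excess over host sites.
import numpy as np
from scipy.optimize import curve_fit

def isotherm(guest_conc, K, d_max):
    return d_max * K * guest_conc / (1.0 + K * guest_conc)

conc = np.array([0.5e-3, 1e-3, 2e-3, 5e-3, 10e-3])   # [G], mol/L
shift = np.array([0.08, 0.14, 0.22, 0.33, 0.40])     # observed ppm change

(K, d_max), _ = curve_fit(isotherm, conc, shift, p0=[100.0, 0.5])
print(f"K ~ {K:.0f} L/mol, limiting shift ~ {d_max:.2f} ppm")
```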

Relevance:

60.00%

Abstract:

Volume determination of tephra deposits is necessary for the assessment of the dynamics and hazards of explosive volcanoes. Several methods have been proposed during the past 40 years that include the analysis of crystal concentration of large pumices, integrations of various thinning relationships, and the inversion of field observations using analytical and computational models. Regardless of their strong dependence on tephra-deposit exposure and distribution of isomass/isopach contours, empirical integrations of deposit thinning trends still represent the most widely adopted strategy due to their practical and fast application. The most recent methods involve the best fitting of thinning data using various exponential segments or a power-law curve on semilog plots of thickness (or mass/area) versus square root of isopach area. The exponential method is mainly sensitive to the number and the choice of straight segments, whereas the power-law method can better reproduce the natural thinning of tephra deposits but is strongly sensitive to the proximal or distal extreme of integration. We analyze a large data set of tephra deposits and propose a new empirical method for the determination of tephra-deposit volumes that is based on the integration of the Weibull function. The new method shows a better agreement with observed data, reconciling the debate on the use of the exponential versus power-law method. In fact, the Weibull best fitting only depends on three free parameters, can well reproduce the gradual thinning of tephra deposits, and does not depend on the choice of arbitrary segments or of arbitrary extremes of integration.
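A minimal sketch of the strategy described above: fit a Weibull-type thinning curve T(x) to thickness versus square root of isopach area, then integrate to a volume. The three-parameter form used here and the data are illustrative; see the paper for the exact formulation:

```python
# Fit a three-parameter Weibull-type thinning curve and integrate it.
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

def weibull_thinning(x, theta, lam, n):
    # Three free parameters (theta, lam, n), matching the paper's count.
    return theta * (x / lam) ** (n - 2.0) * np.exp(-((x / lam) ** n))

x = np.array([2.0, 5.0, 10.0, 20.0, 40.0])     # sqrt(isopach area), km
T = np.array([9.0, 3.1, 1.2, 0.37, 0.068])     # thickness, m (illustrative)

(theta, lam, n), _ = curve_fit(weibull_thinning, x, T,
                               p0=[1.0, 10.0, 1.5], bounds=(0.0, np.inf))

# With x = sqrt(A), dA = 2x dx, so V = integral of 2*x*T(x) dx over x >= 0;
# dividing by 1000 converts m * km^2 to km^3.
volume_km3 = quad(lambda s: 2.0 * s * weibull_thinning(s, theta, lam, n) / 1000.0,
                  0.0, np.inf)[0]
print(f"deposit volume ~ {volume_km3:.2f} km^3")
```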