980 results for clustering, free-form, ottimizzazione, remeshing
Abstract:
Context. The X-ray spectra observed in the persistent emission of magnetars are evidence for the existence of a magnetosphere. The high-energy part of the spectra is explained by resonant cyclotron upscattering of soft thermal photons in a twisted magnetosphere, which has motivated an increasing number of efforts to improve and generalize existing magnetosphere models. Aims. We want to build more general configurations of twisted, force-free magnetospheres as a first step to understanding the role played by the magnetic field geometry in the observed spectra. Methods. First, we reviewed and extended previous analytical works to assess the viability and limitations of semi-analytical approaches. Second, we built a numerical code able to relax an initial configuration of a nonrotating magnetosphere to a force-free geometry, given any arbitrary form of the magnetic field at the stellar surface. The numerical code uses a finite-difference time-domain, divergence-free, conservative scheme based on the magneto-frictional method used in other scenarios. Results. We obtain new numerical configurations of twisted magnetospheres, with distributions of twist and currents that differ from previous analytical solutions. The range of global twist of the new family of solutions is similar to that of the existing semi-analytical models (up to a few radians), but the resulting geometry may be quite different. Conclusions. The geometry of twisted, force-free magnetospheres shows a wider variety of possibilities than previously considered. This has implications for the observed spectra and opens the possibility of implementing alternative models in radiative-transfer simulations aimed at providing spectra to be compared with observations.
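The magneto-frictional idea mentioned in the abstract is simple enough to sketch. Below is a minimal, illustrative Python relaxation loop, assuming a uniform Cartesian grid and forward-Euler time stepping; it is not the paper's code, and unlike the paper's conservative scheme this naive update does not enforce the divergence-free constraint or impose physical boundary conditions.

```python
import numpy as np

# Minimal magneto-frictional relaxation sketch (illustrative, not the paper's code).
# A fictitious velocity v ~ (J x B)/B^2 drives the field toward a force-free state
# (J x B = 0); B is advanced through the induction equation dB/dt = -curl(E).

def curl(F, dx):
    """Curl of a vector field F of shape (3, n, n, n) by central differences."""
    d = lambda comp, ax: np.gradient(F[comp], dx, axis=ax)
    return np.array([
        d(2, 1) - d(1, 2),
        d(0, 2) - d(2, 0),
        d(1, 0) - d(0, 1),
    ])

def relax(B, dx=1.0, nu=0.1, dt=0.01, steps=1000, tol=1e-8):
    for _ in range(steps):
        J = curl(B, dx)                       # current density (units absorbed in nu)
        JxB = np.cross(J, B, axis=0)          # Lorentz force density
        B2 = np.maximum((B**2).sum(0), 1e-12) # avoid division by zero
        v = nu * JxB / B2                     # magneto-frictional velocity
        E = -np.cross(v, B, axis=0)           # ideal-MHD electric field
        B = B - dt * curl(E, dx)              # induction equation (forward Euler)
        if np.abs(JxB).max() < tol:           # force-free residual small enough
            break
    return B
```

The loop drives the Lorentz force J x B toward zero, which is precisely the force-free condition the abstract targets.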
Abstract:
Mode of access: Internet.
Abstract:
In contents of v. 2, the numbering by parts and sections (found in numbers issued 1913-16 only) is not here indicated. "Aids in high school teaching," v. 2, is wrongly paged, bearing the same paging as "How to use a library," which it should follow.
Abstract:
Fundamental principles of precaution are legal maxims that ask for preventive actions, perhaps as contingent interim measures while relevant information about causality and harm remains unavailable, to minimize the societal impact of potentially severe or irreversible outcomes. Such principles do not explain how to make choices or how to identify what is protective when incomplete and inconsistent scientific evidence of causation characterizes the potential hazards. Rather, they entrust lower jurisdictions, such as agencies or authorities, to make current decisions while recognizing that future information can contradict the scientific basis that supported the initial decision. After reviewing and synthesizing national and international legal aspects of precautionary principles, this paper addresses the key question: how can society manage potentially severe or irreversible environmental outcomes when variability, uncertainty, and limited causal knowledge characterize the decision-making? A decision-analytic solution is outlined that focuses on risky decisions and accounts for prior states of information and scientific beliefs that can be updated as subsequent information becomes available. As a practical and established approach to causal reasoning and decision-making under risk, inherent to precautionary decision-making, these (Bayesian) methods help decision-makers and stakeholders because they formally account for probabilistic outcomes and new information, and are consistent and replicable. Rational choice of an action from among various alternatives (defined as a choice that makes preferred consequences more likely) requires accounting for the costs, benefits and change in risks associated with each candidate action. Decisions under any form of the precautionary principle reviewed must account for the contingent nature of scientific information, creating a link to the decision-analytic principle of expected value of information (VOI), which shows the relevance of new information relative to the initial (and smaller) set of data on which the decision was based; a toy calculation is sketched below. We exemplify this seemingly simple situation using risk management of BSE. As an integral aspect of causal analysis under risk, the methods developed in this paper permit the addition of non-linear, hormetic dose-response models to the current set of regulatory defaults, such as the linear, non-threshold models. This increase in the number of defaults is an important improvement because most variants of the precautionary principle require cost-benefit balancing; specifically, increasing the set of causal defaults accounts for beneficial effects at very low doses. We also show that quantitative risk assessment dominates qualitative risk assessment, supporting the extension of the set of default causal models.
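The link the abstract draws to the expected value of information can be made concrete with a toy calculation. The sketch below uses an entirely hypothetical two-action, two-state hazard problem with illustrative losses, and computes the expected value of perfect information (EVPI), an upper bound on what gathering new evidence could be worth.

```python
import numpy as np

# Toy VOI calculation (illustrative numbers, not from the paper).
# Two actions ("restrict" vs "allow") under an uncertain hazard state.
p_hazard = 0.2                      # prior probability the hazard is real
# loss[action][state]: rows = restrict/allow, cols = hazard real / not real
loss = np.array([[10.0, 10.0],      # restrict: fixed economic cost either way
                 [100.0, 0.0]])     # allow: large loss only if hazard is real

prior = np.array([p_hazard, 1 - p_hazard])
exp_loss = loss @ prior             # expected loss of each action
best_now = exp_loss.min()           # act on current information only

# With perfect information we would pick the best action in each state,
# so the expected loss is the state-wise minimum averaged over the prior.
best_perfect = (loss.min(axis=0) * prior).sum()
evpi = best_now - best_perfect      # upper bound on the worth of new research
print(f"act now: {best_now:.1f}, EVPI: {evpi:.1f}")  # act now: 10.0, EVPI: 8.0
```

As new evidence arrives, the prior is replaced by a Bayesian posterior and the same calculation is repeated, which is the updating loop the abstract describes.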
Abstract:
Motivation: The clustering of gene profiles across experimental conditions of interest contributes significantly to the elucidation of unknown gene function, the validation of gene discoveries and the interpretation of biological processes. However, this clustering problem is not straightforward, as the profiles of the genes are not all independently distributed and the expression levels may have been obtained from an experimental design involving replicated arrays. Ignoring the dependence between the gene profiles and the structure of the replicated data can result in important sources of variability in the experiments being overlooked in the analysis, with the consequent possibility of misleading inferences being made. We propose a random-effects model that provides a unified approach to the clustering of genes with correlated expression levels measured in a wide variety of experimental situations. Our model is an extension of the normal mixture model that accounts for the correlations between the gene profiles and enables covariate information to be incorporated into the clustering process. Hence the model is applicable to longitudinal studies with or without replication (for example, time-course experiments, using time as a covariate) and to cross-sectional experiments, using categorical covariates to represent the different experimental classes. Results: We show that our random-effects model can be fitted by maximum likelihood via the EM algorithm, for which the E (expectation) and M (maximization) steps can be implemented in closed form; a minimal sketch of the base algorithm is given below. Hence our model can be fitted deterministically without the need for time-consuming Monte Carlo approximations. The effectiveness of our model-based procedure for the clustering of correlated gene profiles is demonstrated on three real datasets representing typical microarray experimental designs, covering time-course, repeated-measurement and cross-sectional data. In these examples, relevant clusters of genes are obtained, which are supported by existing gene-function annotation. A synthetic dataset is also considered.
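As a reference point for the closed-form E and M steps, here is a minimal EM fit of an ordinary two-component univariate normal mixture, the base model that the proposed random-effects formulation extends. This is an illustrative sketch, not the authors' software, and the function name em_mixture is hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Minimal EM for a univariate two-component normal mixture; both steps are in
# closed form, so no Monte Carlo approximation is needed.
def em_mixture(x, iters=100):
    w = np.array([0.5, 0.5])                     # mixing proportions
    mu = np.array([x.min(), x.max()])            # crude initial means
    sd = np.array([x.std(), x.std()])            # initial standard deviations
    for _ in range(iters):
        # E-step: posterior responsibilities, in closed form
        dens = w * norm.pdf(x[:, None], mu, sd)
        tau = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood estimates, also in closed form
        n_k = tau.sum(axis=0)
        w = n_k / len(x)
        mu = (tau * x[:, None]).sum(axis=0) / n_k
        sd = np.sqrt((tau * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return w, mu, sd

x = np.concatenate([np.random.normal(0, 1, 200), np.random.normal(4, 1, 100)])
print(em_mixture(x))
```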
Abstract:
In this paper we present an efficient k-means clustering algorithm for two-dimensional data. The proposed algorithm re-organizes the dataset into a nested binary tree. Data items are compared at each node with only the two nearest means with respect to each dimension and assigned to the one with the closer mean. The main intuition of our research is as follows: we build the nested binary tree; we then scan the data in raster order by in-order traversal of the tree; lastly, we compare the data item at each node with only the two nearest means to assign it to the intended cluster. In this way we save significant computational cost by reducing the number of comparisons with means and by minimizing use of the Euclidean distance formula, as sketched below. Our results show that our method can perform clustering much faster than the classical ones.
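One possible reading of the comparison-saving step is sketched below in Python: after sorting the means along each dimension, a binary search finds the two candidate means that bracket a point, so only those few candidates are compared by Euclidean distance. This is an interpretation of the abstract, not the authors' implementation, and the helper name assign_limited is hypothetical.

```python
import numpy as np

# Assign each point to a cluster while comparing it only with the means that
# bracket it along each dimension, instead of all k means.
def assign_limited(points, means):
    labels = np.empty(len(points), dtype=int)
    order = [np.argsort(means[:, d]) for d in range(means.shape[1])]
    for i, p in enumerate(points):
        cand = set()
        for d, idx in enumerate(order):
            j = np.searchsorted(means[idx, d], p[d])  # bracketing position
            cand.update(idx[max(j - 1, 0):j + 1])     # two nearest along dim d
        cand = np.fromiter(cand, dtype=int)
        d2 = ((means[cand] - p) ** 2).sum(axis=1)     # Euclidean distance only here
        labels[i] = cand[d2.argmin()]
    return labels
```

A full clustering loop would alternate this assignment with the usual recomputation of the means, as in Lloyd's algorithm.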
Abstract:
Agrin is a proteoglycan secreted by motor neurite terminals that functions to initiate and maintain AChR clusters at the nerve terminal. This led to the theory that neurite terminals decide where neuromuscular synapses form by secreting agrin. However, initiation of AChR clustering occurs in the absence of the innervating motoneuron and in the absence of agrin. In this instance the muscle, not the nerve, would be deciding the location of neuromuscular synapses by drawing neurite terminals towards pre-existing AChR clusters. If this were true, one would expect the initial innervation patterns to be the same in agrin-deficient and wild-type mice. To test this, we quantified intramuscular axonal branching and synapse formation in the diaphragm at E14.5 in agrin-deficient and wild-type mice. Heterozygote mothers were anaesthetised with Nembutal (30 mg) and killed by cervical dislocation. In the diaphragm, the nerve trunk runs down the centre of the muscle and extends branches primarily toward the lateral side. In agrin-deficient mice, however, we found that significantly more branches exited the phrenic nerve trunk, branched in the periphery and extended further on the medial side. Moreover, we found that the percentage of α-bungarotoxin/synaptophysin colocalisations, markers of pre- and postsynaptic differentiation, respectively, was the same in agrin-deficient and wild-type mice. These results show that initial innervation patterns are not the same in agrin-deficient and wild-type mice, indicating that neurite terminals, not muscle, decide the placement of neuromuscular synapses in the absence of agrin.
Abstract:
Reducing fossil fuel consumption and developing energy-saving technologies are issues of central importance for both industry and research, owing to the drastic effects that anthropogenic pollutant emissions are having on the environment. While a growing number of standards and regulations are being issued to address these problems, the need to develop low-emission technologies is driving research in numerous industrial sectors. Although the deployment of renewable energy sources is seen as the most promising long-term solution, an effective and complete integration of such technologies is currently impractical, owing both to technical constraints and to the sheer share of energy production, currently met by fossil sources, that alternative technologies would have to cover. The optimization of energy production and management, on the other hand, combined with the development of technologies to reduce energy consumption, represents an adequate solution to the problem, one that can also be deployed over shorter time horizons. The aim of this thesis is to investigate, develop and apply a set of numerical tools for optimizing the design and management of energy processes, to be used to reduce fuel consumption and optimize energy efficiency. The methodology developed relies on a numerical system-modelling approach, which exploits the predictive capabilities deriving from a mathematical representation of the processes to develop optimization strategies for them under realistic operating conditions. In developing these procedures, particular emphasis is placed on the need to derive correct management strategies that account for the dynamics of the plants analyzed, in order to obtain the best performance during actual operation. In the course of the thesis, the energy-optimization problem was addressed with reference to three different technological applications. In the first, a multi-source plant for meeting the energy demand of a commercial building was considered. Since this system uses a combination of several technologies to produce the thermal and electrical energy required by the users, the correct load-sharing strategy must be identified to guarantee the plant's maximum energy efficiency. Based on a simplified model of the plant, the problem was solved by applying a deterministic Dynamic Programming algorithm, and the results were compared with those obtained from a simpler rule-based strategy, thereby demonstrating the advantages of adopting an optimal control strategy (a minimal sketch of such a Dynamic Programming recursion is given below). In the second application, the design of a hybrid solution for energy recovery from a hydraulic excavator was investigated. Since several technological layouts can be conceived to implement this solution, and the introduction of additional components requires correct sizing, a methodology is needed to evaluate the maximum performance obtainable from each alternative.
The comparison between the different layouts was therefore carried out on the basis of the energy performance of the machine during a standardized digging cycle, estimated with the aid of a detailed model of the plant. Since the addition of energy-recovery devices introduces additional degrees of freedom into the system, it was also necessary to determine their optimal control strategy in order to evaluate the maximum performance obtainable from each layout. This problem was again solved with a Dynamic Programming algorithm, which exploits a simplified model of the system devised for the purpose. Once the optimal performance of each design solution had been determined, a fair comparison between the alternatives was possible. In the third and final application, an organic Rankine cycle (ORC) plant for recovering waste heat from passenger-car exhaust gases was analyzed. Although ORC plants can potentially produce significant gains in vehicle fuel savings, their correct operation requires the development of complex control strategies able to cope with the variability of the heat source; moreover, while maximizing fuel savings, the system must be kept within safe operating conditions. To address the problem, a robust and efficient model of the plant was built, based on the Moving Boundary Methodology, to simulate the phase-change dynamics of the organic fluid and estimate plant performance. This model was then used to design a model predictive controller (MPC) able to estimate the optimal control parameters for managing the system during transient operation. To solve the corresponding nonlinear dynamic optimization problem, an algorithm based on Particle Swarm Optimization was developed. The results obtained with this controller were compared with those obtainable from a classical proportional-integral (PI) controller, again showing the energy advantages of adopting an optimal control strategy.
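To make the first application concrete, the following is a minimal deterministic Dynamic Programming sketch for a load-sharing problem of the kind described: a gas boiler and an electric heat pump cover a thermal demand, with a small thermal storage as the DP state. All component models, efficiencies and prices below are illustrative assumptions, not the thesis' plant data.

```python
import numpy as np

# Backward Dynamic Programming over a discretized storage state.
demand = np.array([20.0, 35.0, 50.0, 30.0])   # thermal demand per stage [kWh]
soc_grid = np.linspace(0.0, 20.0, 21)         # storage state grid [kWh], step 1
splits = np.linspace(0.0, 1.0, 11)            # boiler share of total production
productions = np.linspace(0.0, 60.0, 13)      # total production choices [kWh]

def stage_cost(q_boiler, q_hp):
    gas = q_boiler / 0.9                      # boiler efficiency 0.9
    elec = q_hp / 3.0                         # heat-pump COP 3
    return 0.04 * gas + 0.15 * elec           # fuel/electricity prices [EUR/kWh]

V = np.zeros(len(soc_grid))                   # terminal value: no end-of-horizon cost
for t in reversed(range(len(demand))):        # backward recursion over stages
    V_new = np.full_like(V, np.inf)
    for i, soc in enumerate(soc_grid):
        for q_tot in productions:
            soc_next = soc + q_tot - demand[t]
            if not soc_grid[0] <= soc_next <= soc_grid[-1]:
                continue                      # storage limits violated
            j = int(round(soc_next))          # nearest grid index (grid step = 1)
            for s in splits:
                c = stage_cost(s * q_tot, (1 - s) * q_tot) + V[j]
                V_new[i] = min(V_new[i], c)
    V = V_new

print("minimum cost from empty storage [EUR]:", V[0])
```

A rule-based strategy would instead fix the split heuristically (e.g. heat pump first), which is exactly the comparison baseline the thesis uses.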
Abstract:
The spatial distribution patterns of the diffuse, primitive, and classic beta-amyloid (Abeta) deposits were studied in areas of the medial temporal lobe in 12 cases of Down's syndrome (DS) aged 35 to 67 years. Large clusters of diffuse deposits were present in the youngest patients; cluster size then declined with patient age but increased again in the oldest patients. By contrast, the cluster sizes of the primitive and classic deposits increased with age to maxima in patients aged 45 to 55 and 60 years, respectively, and declined in the oldest patients. In the parahippocampal gyrus (PHG), the primitive deposits were most highly clustered in cases of intermediate age. The data suggest a developmental sequence in DS in which Abeta is deposited initially in the form of large clusters of diffuse deposits that are then gradually replaced by clusters of primitive and classic deposits. The oldest patients were an exception to this sequence in that their pattern of clustering resembled that of the youngest patients.
Abstract:
Lead in petrol has been identified as a health hazard and attempts are being made to create a lead-free atmosphere. Through an intensive study, a review is made of the various options available to the automobile and petroleum industries. The economic and atmospheric penalties, coupled with automobile fuel consumption trends, are calculated and presented in both graphical and tabulated form. Experimental measurements of carbon monoxide and hydrocarbon emissions are also presented for certain selected fuels. The reduction in CO and HC emissions with the employment of a three-way catalyst is also discussed. All tests were carried out on a Fiat 127A engine at wide-open throttle and standard timing setting. A Froude dynamometer was used to vary engine speed. With the introduction of lead-free petrol, interest in combustion chamber deposits in spark ignition engines has been renewed. These deposits cause an octane requirement increase (ORI), i.e. a rise in engine knock, and decreased volumetric efficiency. The detrimental effect of the deposits has been attributed to the physical volume of the deposit and to changes in heat transfer. This study attempts to assess why leaded deposits, though often greater in mass and volume, yield relatively lower ORI than lead-free deposits under identical operating conditions. This has been carried out by identifying the differences in the physical nature of the deposits and then by measuring their thermal conductivity and permeability. The measured thermal conductivity results are later used in a mathematical model to determine heat transfer rates and the temperature variation across the engine wall and deposit; a worked example of this series-resistance calculation is sketched below. For the model, the walls of the combustion cylinder and top are assumed to be free of engine deposit, the major deposit being on the piston head. Seven different heat transfer equations are formulated describing heat flow at each part of the four-stroke cycle, and the variation of cylinder wall area exposed to the gas mixture is accounted for. The heat transfer equations are solved using numerical methods and the temperature variations across the wall identified. Though the calculations have been carried out for one particular moment in the cycle, similar calculations are possible for every degree of crank angle, and thus further information regarding the location of maximum temperatures at every degree of crank angle may also be determined. In conclusion, thermal conductivity values of leaded and lead-free deposits have been found. The fundamental concepts of a mathematical model with great potential have been formulated, and it is hoped that with future work it may be used in simulations for different engine construction materials and motor fuels, leading to better design of future prototype engines.
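As referenced above, the core of the wall/deposit heat-transfer calculation can be illustrated as a steady one-dimensional series-resistance problem. The property values below are illustrative assumptions, not the conductivities measured in this work; the point is that a thin, low-conductivity deposit raises the gas-side surface temperature, which is the mechanism invoked for ORI.

```python
# Steady 1D conduction through a deposit layer plus the metal piston crown,
# between hot gas and coolant, as a chain of thermal resistances.
T_gas, T_coolant = 900.0, 380.0   # K, gas-side and coolant-side temperatures
h_gas, h_cool = 500.0, 4000.0     # W/m^2K, convective coefficients
L_dep, k_dep = 200e-6, 0.5        # m, W/mK: deposit thickness and conductivity
L_wall, k_wall = 8e-3, 50.0       # m, W/mK: metal crown thickness and conductivity

R = 1/h_gas + L_dep/k_dep + L_wall/k_wall + 1/h_cool   # total area resistance
q = (T_gas - T_coolant) / R                            # heat flux [W/m^2]

# The temperature drop across each layer is q times that layer's resistance.
T_dep_surface = T_gas - q / h_gas
T_metal_surface = T_dep_surface - q * L_dep / k_dep
print(f"q = {q:.0f} W/m^2, deposit surface {T_dep_surface:.0f} K, "
      f"metal surface {T_metal_surface:.0f} K")
```

Repeating such a balance with the crank-angle-dependent gas temperature, heat-transfer coefficient and exposed wall area gives the per-degree calculation described in the abstract.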
Abstract:
Changes in modern structural design have created a demand for products which are light but possess high strength. The objective is a reduction in fuel consumption and weight of materials to satisfy both economic and environmental criteria. Cold roll forming has the potential to fulfil this requirement. The bending process is controlled by the shape of the profile machined on the periphery of the rolls. A CNC lathe can machine complicated profiles to a high standard of precision, but the expertise of a numerical control programmer is required. A computer program was developed during this project, using the expert system concept, to calculate tool paths and consequently to expedite the procurement of the machine control tapes whilst removing the need for a skilled programmer. Codifying human expertise and encapsulating that knowledge in computer memory removes the dependency on highly trained people, whose services can be costly, inconsistent and unreliable. A successful cold roll forming operation, where the product is geometrically correct and free from visual defects, is not easy to attain. The geometry of the sheet after travelling through the rolling mill depends on the residual strains generated by the elastic-plastic deformation. Accurate evaluation of the residual strains can provide the basis for predicting the geometry of the section. A study of geometric and material non-linearity, yield criteria, material hardening and stress-strain relationships was undertaken in this research project. The finite element method was chosen to provide a mathematical model of the bending process and, to ensure efficient manipulation of the large stiffness matrices, the frontal solution was applied; a minimal illustration of stiffness assembly is sketched below. A series of experimental investigations provided data to compare with corresponding values obtained from the theoretical modelling. A computer simulation, capable of predicting that a design will be satisfactory prior to the manufacture of the rolls, would allow effort to be concentrated on devising an optimum design where costs are minimised.
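The stiffness-assembly step mentioned above can be shown in a few lines. The sketch below assembles and solves a toy 1D bar problem with the full global matrix held in memory; the frontal solution used in the thesis instead assembles and eliminates element by element precisely to avoid storing that full matrix. All values are illustrative.

```python
import numpy as np

# Minimal 1D finite-element sketch: assemble the global stiffness matrix for a
# chain of bar elements and solve K u = f for an axial tip load.
n_el, E, A, L = 4, 210e9, 1e-4, 1.0                      # elements, Pa, m^2, m
k_e = E * A / (L / n_el) * np.array([[1, -1], [-1, 1]])  # element stiffness

K = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):
    K[e:e + 2, e:e + 2] += k_e       # scatter element matrix into the global one

f = np.zeros(n_el + 1); f[-1] = 1e3  # 1 kN axial load at the free end
Kr, fr = K[1:, 1:], f[1:]            # clamp node 0 (remove its row and column)
u = np.linalg.solve(Kr, fr)
print("tip displacement [m]:", u[-1])  # ~4.76e-5 m, matching F*L/(E*A)
```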
Abstract:
T cell epitopes lie at the heart of the adaptive immune response and form the essential nucleus of anti-tumour peptide or epitope-based vaccines. Antigenic T cell epitopes are presented to T cell receptors by major histocompatibility complex (MHC) molecules. Measuring the affinity between a given MHC molecule and an antigenic peptide experimentally is both difficult and time-consuming, so various computational methods have been developed for this purpose. A server has been developed to allow a structural approach to the problem by generating specific MHC:peptide complex structures and providing configuration files to run molecular modelling simulations upon them. The system allows the automated construction of MHC:peptide structure files and the corresponding configuration files required to execute a molecular dynamics simulation using NAMD (a sketch of such a configuration appears below), and has been made available through a web-based front end and stand-alone scripts. Previous attempts at structural prediction of MHC:peptide affinity have been limited by the paucity of structures and the computational expense of running large-scale molecular dynamics simulations. The MHCsim server (http://igrid-ext.cryst.bbk.ac.uk/MHCsim) allows the user to rapidly generate any desired MHC:peptide complex and will facilitate molecular modelling simulation of MHC complexes on an unprecedented scale.
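The kind of NAMD configuration file such a server emits can be sketched as follows. The file names are placeholders and the option set is a minimal, hedged subset of a production protocol, not the actual MHCsim output.

```python
# Emit a bare-bones NAMD configuration for a prepared MHC:peptide complex.
namd_config = """\
structure          mhc_peptide.psf
coordinates        mhc_peptide.pdb
parameters         par_all27_prot_na.prm
paraTypeCharmm     on
temperature        310          ;# physiological temperature [K]
timestep           2.0          ;# fs; requires rigid bonds
rigidBonds         all
cutoff             12.0         ;# nonbonded cutoff [A]
outputName         mhc_peptide_md
numsteps           500000       ;# 2 fs x 500000 = 1 ns
"""
with open("mhc_peptide_md.conf", "w") as fh:
    fh.write(namd_config)
```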
Abstract:
Secretory IgA contributes to humoral defense mechanisms against pathogens targeting mucosal surfaces, and secretory component (SC) fulfills multiple roles in this defense. The aims of this study were to quantify total SC and to analyze the form of free SC in sputa from normal subjects, subjects with asthma, and subjects with cystic fibrosis (CF). Significantly higher levels of SC were detected in CF than in both other groups. Gel filtration chromatography revealed that SC in CF was relatively degraded. Free SC normally binds interleukin (IL)-8 and inhibits its function; in CF sputa, however, IL-8 binding to intact SC was reduced. Analysis of the total carbohydrate content of free SC indicated overglycosylation in CF compared with normal subjects and subjects with asthma. Monosaccharide composition analysis of free SC from CF subjects revealed overfucosylation and undersialylation, in agreement with the reported CF glycosylation phenotype. SC binding to IL-8 did not interfere with the binding of IL-8 to heparin, indicating distinct binding sites on IL-8 for negative regulation of function by SC and heparin. We suggest that defective structure and function of SC contribute to the characteristic sustained inflammatory response in the CF airways.
Abstract:
The objective of this study is to demonstrate the use of the weak form partial differential equation (PDE) method for finite-element (FE) modeling of a new constitutive relation without the need for user subroutine programming. Viscoelastic asphalt mixtures were modeled by the weak form PDE-based FE method as the examples in the paper. A solid-like generalized Maxwell model was used to represent the deformation mechanism of a viscoelastic material; its constitutive relations were derived and implemented in the weak form PDE module of Comsol Multiphysics, a commercial FE program. The weak form PDE modeling of viscoelasticity was verified by comparing Comsol and Abaqus simulations that employed the same loading configurations and material property inputs in virtual laboratory test simulations; both produced identical results in terms of axial and radial strain responses. The modeling was further validated by comparing the weak form PDE predictions with real laboratory test results for six types of asphalt mixtures with two air void contents and three aging periods. The viscoelastic material properties, such as the coefficients of a Prony series model for the relaxation modulus (sketched below), were obtained by conversion from the master curves of dynamic modulus and phase angle. Strain responses of compressive creep tests at three temperatures and of cyclic load tests were predicted using the weak form PDE modeling and found to be comparable with the measurements from the real laboratory tests. It was demonstrated that weak form PDE-based FE modeling can serve as an efficient method for implementing new constitutive models and can free engineers from user subroutine programming.
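The Prony series form of the generalized Maxwell relaxation modulus referenced above is E(t) = E_inf + sum_i E_i * exp(-t / tau_i). The sketch below evaluates it with illustrative coefficients, not the converted asphalt-mixture values from the paper.

```python
import numpy as np

# Prony series relaxation modulus of a solid-like generalized Maxwell model:
# an equilibrium spring E_inf in parallel with decaying Maxwell arms (E_i, tau_i).
E_inf = 50.0                              # MPa, long-term equilibrium modulus
E_i = np.array([2000.0, 800.0, 150.0])    # MPa, spring moduli of the Maxwell arms
tau_i = np.array([0.01, 1.0, 100.0])      # s, relaxation times

def relaxation_modulus(t):
    t = np.atleast_1d(t)[:, None]
    return E_inf + (E_i * np.exp(-t / tau_i)).sum(axis=1)

print(relaxation_modulus([0.0, 1.0, 1000.0]))  # decays from ~3000 toward 50 MPa
```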
Abstract:
2000 Mathematics Subject Classification: 17B01, 17B30, 17B40.