927 results for model-based autonomy


Relevance: 90.00%

Publisher:

Abstract:

The viscoelasticity of the mammalian lung is determined by the mechanical properties and structural regulation of the airway smooth muscle (ASM). Exposure to polluted air may deteriorate these properties, with harmful consequences for individual health. Formaldehyde (FA) is an important indoor pollutant found among volatile organic compounds. This pollutant permeates through the smooth muscle tissue, forming covalent bonds between proteins in the extracellular matrix and intracellular protein structures, changing the mechanical properties of the ASM and inducing asthma symptoms, such as airway hyperresponsiveness, even at low concentrations. In the experimental scenario, the mechanical effect of FA is a stiffening of the tissue, but the mechanism behind this effect is not fully understood. Thus, the aim of this study is to reproduce the mechanical behavior of the ASM, such as contraction and stretching, with and without FA action. For this, we created a two-dimensional viscoelastic network model based on a Voronoi tessellation, solved using the fourth-order Runge-Kutta method. The equilibrium configuration was reached when the forces in different parts of the network were equal. This model simulates the mechanical behavior of the ASM through a network of dashpots and springs. This dashpot-spring mechanical coupling mimics the composition of the actomyosin machinery of the ASM through the contraction of springs to a minimum length. We hypothesized that the formation of covalent bonds due to the FA action can be represented in the model by a simple change in the elastic constant of the springs, while the action of methacholine (MCh) reduces the equilibrium length of the springs. A sigmoid curve of tension as a function of MCh dose was obtained, showing increased tension when the muscle strip was exposed to FA. Our simulations suggest that FA, at a concentration of 0.1 ppm, can affect the elastic properties of the smooth muscle fibers by a factor of 120%.
We also analyze the dynamic mechanical properties, observing the viscous and elastic behavior of the network. Finally, the proposed model, although simple, incorporates the phenomenology of both MCh and FA and reproduces experimental results observed with in vitro exposure of smooth muscle to FA. Thus, this new mechanical approach incorporates several well-known features of the contractile system of the cells in a tissue-level model. The model can also be used at different biological scales.
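As a minimal sketch of the ingredients named above (a spring-dashpot element driven to equilibrium by fourth-order Runge-Kutta integration, FA as a change of the elastic constant, MCh as a shorter rest length), the following Python reduces the network to a single overdamped node between two elements. All constants, including the 2.2x stiffness factor, are illustrative stand-ins, not the thesis's fitted parameters.

```python
def rk4_step(f, t, y, dt):
    """One fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def equilibrium_tension(k, L0, L=2.0, c=1.0, dt=0.01, steps=5000):
    """Overdamped node at position x between two identical spring-dashpot
    elements anchored at 0 and L; integrate until the element forces
    balance, then return the spring tension k*(x - L0)."""
    def f(t, x):
        return (-k * (x - L0) + k * ((L - x) - L0)) / c
    x = 0.6 * L                      # arbitrary initial position
    for _ in range(steps):
        x = rk4_step(f, 0.0, x, dt)
    return k * (x - L0)

t_ctrl = equilibrium_tension(k=1.0, L0=0.5)  # control strip
t_mch = equilibrium_tension(k=1.0, L0=0.3)   # MCh: springs contract (shorter L0)
t_fa = equilibrium_tension(k=2.2, L0=0.3)    # FA: stiffer springs (illustrative 2.2x)
```

At equilibrium the two element forces balance; stiffening the springs at a fixed MCh-shortened rest length raises the tension, qualitatively reproducing the FA effect described above.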

Relevance: 90.00%

Publisher:

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter into the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a “revision” of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of the magnetic fields and into the acceleration of high-energy particles via shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM.
The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions:
• Is it possible to model “self-consistently” the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have done in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8.
In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that, during a merger, a fraction of the PdV work done by the infalling subcluster in passing through the most massive one is injected in the form of magnetosonic waves. Then the processes of stochastic acceleration of the relativistic electrons by these waves, and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters, are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift.
The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of ∼ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ∼ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequencies and allows future radio surveys to be designed. Based on the results of Chapt. 6, in Chapt. 7 we present a work in progress on a “revision” of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ∼ 0.05-0.2) with our ongoing GMRT Radio Halo Pointed Observations of 50 X-ray luminous galaxy clusters (at z ∼ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ∼ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an “average” size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is however well beyond the aim of this PhD thesis.
On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last “geometrical” MH-RH correlation allows us to “observationally” overcome the limitation of the “average” size of Radio Halos. Thus in this Chapter, by making use of this “geometrical” correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new and powerful tool of investigation, and we show that all the observed correlations (PR-RH, PR-MH, PR-T, PR-LX, . . . ) now become well understood in the context of the re-acceleration model. In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
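The cut-off argument above can be made concrete with an order-of-magnitude sketch: balancing a systematic acceleration term gamma/tau_acc against synchrotron plus inverse Compton losses gives a break Lorentz factor gamma_b = 1/(beta * tau_acc), and hence a maximum synchrotron frequency. The Python below uses standard cgs constants and the approximate relation nu ≈ 4.2 gamma^2 B[µG] Hz; the acceleration time is an illustrative assumption, not a value from the thesis.

```python
import numpy as np

SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]
M_E_C2 = 8.187e-7     # electron rest energy [erg]
C = 2.998e10          # speed of light [cm/s]
A_RAD = 7.566e-15     # radiation constant [erg cm^-3 K^-4]
YR = 3.156e7          # seconds per year

def loss_coeff(B_muG, z=0.0):
    """beta in dgamma/dt = -beta * gamma^2 (synchrotron + inverse Compton)."""
    U_B = (B_muG * 1e-6) ** 2 / (8 * np.pi)   # magnetic energy density
    U_cmb = A_RAD * (2.725 * (1 + z)) ** 4    # CMB photon energy density
    return (4.0 / 3.0) * SIGMA_T * C / M_E_C2 * (U_B + U_cmb)

def gamma_break(tau_acc_yr, B_muG, z=0.0):
    """Balance gamma/tau_acc = beta*gamma^2  =>  gamma_b = 1/(beta*tau_acc)."""
    return 1.0 / (loss_coeff(B_muG, z) * tau_acc_yr * YR)

def nu_break_GHz(tau_acc_yr, B_muG, z=0.0):
    """Approximate synchrotron break frequency: nu ~ 4.2 gamma^2 B[muG] Hz."""
    return 4.2 * gamma_break(tau_acc_yr, B_muG, z) ** 2 * B_muG / 1e9
```

With tau_acc of order 10^8 yr and B ∼ 1 µG this lands near GHz frequencies; a longer (less efficient) tau_acc pushes the break to lower frequencies, which is why low-frequency surveys should find halos more common.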

Relevance: 90.00%

Publisher:

Abstract:

The title of my thesis is "The role of ideas and their change in higher education policy-making processes from the eighties to the present day: the cases of England and New Zealand in comparative perspective". From a theoretical point of view, the aim of my work is to carry out research modelled on constructivist theory. It focuses on the analysis of the impact of ideas on policy-making processes by means of epistemic communities, think tanks and the various socioeconomic contexts that may have played a key role in the construction of the different paths. From my point of view, ideas constitute a priority research field worth analysing, since their role in policy-making processes has traditionally been rather unexplored. In this context, and with the aim of developing a research strand based on the role of ideas, I intend to carry on my study from the perspective of change. Depending on the data and information collected, I evaluated the weight of each of these variables, and possibly of others such as institutions and individual interests, which may have influenced the formation of the policy-making processes. In this light, I planned to adopt a qualitative research methodology, which I believe to be more effective than the more difficult and possibly reductive application of quantitative data sets. I therefore reckon that the most appropriate tools for information processing include content analysis and in-depth interviews with personalities of the political panorama (élite or not) who have participated in the process of higher education reform from the eighties to the present day. The two cases taken into consideration surely set an example of radical reform processes which have occurred in quite different contexts, determined by socioeconomic characteristics and the traits of the élite.

In New Zealand the described process has taken place at a steady pace and with a good degree of consequentiality, in line with the reforms in other state divisions driven by the ideas of New Public Management. By contrast, in England the reforming action of Margaret Thatcher acquired a very radical connotation, as it brought into the ambit of higher education policy concepts like efficiency, excellence and rationalization that contrasted with the generalist, mass-oriented ideas fashionable during the seventies. The mission I intend to accomplish throughout my research is to investigate and analyse in more depth the differences that seem to emerge from two contexts which most of the literature regards as a single model: the Anglo-Saxon model. In this light, the dense analysis of policy processes allowed both the controversial and contrasting aspects of the two realities compared to emerge, along with the role and weight of variables such as ideas (the main variable), institutional settings and individual interests acting in each context. The cases I mean to attend to present peculiar aspects worth developing in an in-depth analysis, an outline of which is provided in this abstract.

England. The Conservative government, from 1981, introduced radical changes in the sector of higher education: first cutting down on state funding and then creating an institution for the planning and leadership of the polytechnics (the non-university sector). Afterwards, the school reform by Margaret Thatcher in 1988 gave rise to a great stir all over Europe, due both to its considerable innovative imprint and to its strong attack on the pedagogy of 'active' schooling and progressive education, until then recognized as a merit of the British public school. In the ambit of university education this reform, together with similar measures brought in during 1992, put into practice the Conservative principles through a series of actions that included: the suppression of the irremovability principle for university teachers; the introduction of student loans for low-income students; and the cancellation of the clear distinction between universities and polytechnics. The policies of Mr Blair's Labour majority did not diverge much from the Conservatives' position. In 2003 Blair's cabinet risked becoming a minority over an important university reform proposal. This proposal foresaw the autonomy for universities to raise enrolment fees for students up to 3,000 pounds (whereas formerly the ceiling was 1,125 pounds). Blair had to face internal opposition within his own party over a measure that, according to the 150 MPs promoting an adverse motion, had not been included in the electoral programme and risked creating income-based discrimination among students. As a matter of fact, the bill focused on the introduction of very low-interest student loans to be settled only once the student had found a remunerated occupation (a system already provided for by Australian legislation).

New Zealand. Contrary to many other countries, New Zealand has adopted a very wide vision of tertiary education. It includes, in fact, the full educational programme that is internationally recognized as post-secondary education. Should we spotlight a peculiarity of New Zealand tertiary education policy, it would be 'change'. Looking at the reform history of the tertiary education system, we can clearly identify four 'sub-periods' from the eighties to the present day: 1. Before the 80s: an elitist system characterized by low participation rates. 2. Between the mid and late 80s: a trend towards the enlargement of participation associated with greater competition. 3. 1990-1999: a further step towards a competitive, market-oriented model. 4. From 2000 to today: a continuous evolution towards a more competitive, market-oriented model, together with growing attention to state control for the social and economic development of the nation. At present the government of New Zealand is working to strengthen this process, primarily in relation to the role of tertiary education as a steady factor of national welfare, where professional development contributes actively to the growth of the national economic system. The cases of England and New Zealand are the focus of an in-depth investigation that starts from an analysis of the policies of each nation and develops into a comparative study. At this point I attempt to draw some preliminary impressions from the facts described above. University policies in England and New Zealand have both undergone a significant reform process since the early eighties; in both contexts the importance of the ideas that constituted the basis of politics until 1980 was quite relevant. Generally speaking, in both cases the pre-reform policies were inspired by egalitarianism and expansion of the student population, while those brought in by the reforms pursued efficiency, quality and competitiveness. Undoubtedly, in line with this general tendency, which reflects the hypothesis proposed, the two university systems present several differences. The university system in New Zealand proceeded steadily towards the implementation of a managerial conception of tertiary education, especially from 1996 onwards, in accordance with the reform process of the whole public sector. In the United Kingdom, as in the rest of Europe, the new approach to university policy-making had to confront a deep-rooted tradition of progressive education and the idea of education expansion that in fact dominated until the eighties.

From this viewpoint the governing action of Margaret Thatcher gave rise to a radical change that revolutionized the objectives and key values of the whole educational system, particularly in the higher education sector. Ideas such as efficiency, excellence and control of performance became decisive. The Labour cabinets of Blair developed in the wake of the Conservative reforms. This appears to be a focal point of this study, which observes how in New Zealand, too, the reform process occurred transversally across progressive and conservative administrations. The preliminary impression is therefore that ideas deeply mark reform processes: the aim of my research is to verify to what extent this statement is true. In order to build a comprehensive analysis, further significant factors will have to be investigated: the way ideas are perceived and implemented by the different political élites; how the various socioeconomic contexts influence the reform process; how institutional structures condition policy-making processes; and whether individual interests play a role and, if so, to what extent.

Relevance: 90.00%

Publisher:

Abstract:

The diagnosis, grading and classification of tumours has benefited considerably from the development of DCE-MRI, which is now essential to the adequate clinical management of many tumour types owing to its capability to detect active angiogenesis. Several strategies have been proposed for DCE-MRI evaluation. Visual inspection of contrast-agent concentration curves vs. time is a very simple yet operator-dependent procedure; therefore more objective approaches have been developed to facilitate comparison between studies. In so-called model-free approaches, descriptive or heuristic information extracted from the raw time-series data is used for tissue classification. The main issue with these schemes is that they have no direct interpretation in terms of the physiological properties of the tissues. On the other hand, model-based investigations typically involve compartmental tracer kinetic modelling and pixel-by-pixel estimation of kinetic parameters via non-linear regression applied to regions of interest appropriately selected by the physician. This approach has the advantage of providing parameters directly related to the pathophysiological properties of the tissue, such as vessel permeability, local regional blood flow, extraction fraction, and the concentration gradient between plasma and the extravascular-extracellular space. However, nonlinear modelling is computationally demanding and the accuracy of the estimates can be affected by the signal-to-noise ratio and by the initial solutions. The principal aim of this thesis is to investigate the use of semi-quantitative and quantitative parameters for the segmentation and classification of breast lesions.
The objectives can be subdivided as follows: to describe the principal techniques for evaluating time-intensity curves in DCE-MRI, with a focus on the kinetic models proposed in the literature; to evaluate the influence of the parametrization choice for a classic bi-compartmental kinetic model; to evaluate the performance of a method for simultaneous tracer kinetic modelling and pixel classification; and to evaluate the performance of machine learning techniques for the segmentation and classification of breast lesions.
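As an illustration of the compartmental tracer-kinetic approach described above, the sketch below fits the standard two-parameter Tofts model, Ct(t) = Ktrans * (Cp ⊗ exp(-kep t)), to a synthetic pixel curve by non-linear regression. The biexponential arterial input function and all parameter values are hypothetical, not taken from this thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 6.0, 120)   # acquisition times [min]

def aif(t):
    """Hypothetical biexponential arterial input function Cp(t)."""
    return 3.99 * np.exp(-0.144 * t) + 4.78 * np.exp(-0.0111 * t)

def tofts(t, Ktrans, kep):
    """Standard Tofts model: Ct(t) = Ktrans * conv(Cp, exp(-kep*t))."""
    dt = t[1] - t[0]
    irf = np.exp(-kep * t)                           # impulse response
    return Ktrans * np.convolve(aif(t), irf)[: t.size] * dt

# Simulate a noisy tissue curve, then recover the parameters by regression.
rng = np.random.default_rng(0)
true_Ktrans, true_kep = 0.25, 0.60                   # [1/min], illustrative
data = tofts(t, true_Ktrans, true_kep) + rng.normal(0.0, 0.02, t.size)
(Ktrans_hat, kep_hat), _ = curve_fit(tofts, t, data, p0=(0.1, 0.3))
```

Fitting this curve for every pixel is exactly the computational burden the abstract mentions, and the dependence on the starting point p0 is the "initial solutions" issue.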

Relevance: 90.00%

Publisher:

Abstract:

Decision trees have been proposed as a basis for modifying table-based injection to reduce transient particulate spikes during the turbocharger lag period. It has been shown that decision trees can detect particulate spikes in real time. In well-calibrated, electronically controlled diesel engines these spikes are narrow and are encompassed by a wider NOx spike. Decision trees have been shown to pinpoint the exact location of measured opacity spikes in real time, thus enabling targeted PM reduction with a near-zero NOx penalty. A calibrated dimensional model has been used to demonstrate the possible reduction of particulate matter with targeted injection pressure pulses. A post-injection strategy optimized for near-stoichiometric combustion has been shown to provide additional benefits. Empirical models have been used to calculate emission tradeoffs over the entire FTP cycle. An empirical model-based transient calibration has been used to demonstrate that such targeted transient modifiers are more beneficial at lower engine-out NOx levels.
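A minimal sketch of the real-time classification idea, assuming hypothetical per-timestep features (commanded fueling and a boost-pressure deficit that is large during turbocharger lag) and an illustrative labelling rule; the actual work used measured opacity spikes, not this synthetic rule.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 2000
# Hypothetical per-timestep features: commanded fueling and boost-pressure
# deficit (demand minus actual boost), which is large during turbo lag.
fueling = rng.uniform(0.0, 1.0, n)
boost_deficit = rng.uniform(0.0, 1.0, n)
X = np.column_stack([fueling, boost_deficit])

# Illustrative ground truth: opacity spikes when fueling is high while the
# turbo is still lagging (over-rich combustion).
spike = ((fueling > 0.7) & (boost_deficit > 0.6)).astype(int)

# A shallow tree localises the spike region and is cheap enough to
# evaluate sample-by-sample in real time.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, spike)
acc = clf.score(X, spike)
```

Once trained, `clf.predict` on the current feature vector flags the timesteps where a targeted injection modifier should be applied.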

Relevance: 90.00%

Publisher:

Abstract:

Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the nationwide Swiss radon database collected between 1994 and 2004. Of these, 80% of randomly selected measurements were used for model development and the remaining 20% for an independent model validation. A multivariable log-linear regression model was fitted and relevant predictors selected according to evidence from the literature, the adjusted R², the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). The prediction model was evaluated by calculating the Spearman rank correlation between measured and predicted values. Additionally, the predicted values were categorised into three categories (below the 50th, 50th-90th, and above the 90th percentile) and compared with the measured categories using a weighted Kappa statistic. The most relevant predictors of indoor radon levels were the tectonic unit and the year of construction of the building, followed by soil texture, degree of urbanisation, floor of the building where the measurement was taken, and housing type (P-values <0.001 for all). Mean predicted radon values (geometric means) were 66 Bq/m³ (interquartile range 40-111 Bq/m³) in the lowest exposure category, 126 Bq/m³ (69-215 Bq/m³) in the medium category, and 219 Bq/m³ (108-427 Bq/m³) in the highest category. The Spearman correlation between predictions and measurements was 0.45 (95%-CI: 0.44; 0.46) for the development dataset and 0.44 (95%-CI: 0.42; 0.46) for the validation dataset. The Kappa coefficients were 0.31 for the development dataset and 0.30 for the validation dataset. The model explained 20% of the overall variability (adjusted R²).
In conclusion, this residential radon prediction model, based on a large number of measurements, was demonstrated to be robust through validation with an independent dataset. The model is appropriate for predicting radon exposure levels of the Swiss population in epidemiological research. Nevertheless, some exposure misclassification and regression to the mean is unavoidable and should be taken into account in future applications of the model.
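The modelling and validation pipeline described above can be sketched on synthetic data: fit a log-linear regression with dummy-coded categorical predictors on an 80% split and score the held-out 20% with the Spearman rank correlation. The predictor names and effect sizes below are invented for illustration and do not reflect the Swiss dataset.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 4000
# Hypothetical categorical predictors mirroring two of the paper's:
# tectonic-unit class and construction-year class.
tecto = rng.integers(0, 4, n)
year = rng.integers(0, 3, n)
# Log-linear data-generating model with invented coefficients.
log_radon = 4.0 + 0.4 * tecto - 0.3 * year + rng.normal(0.0, 0.8, n)

def design(tecto, year):
    """Intercept plus dummy-coded categories (first level as reference)."""
    return np.column_stack([np.ones(len(tecto)),
                            np.eye(4)[tecto][:, 1:],
                            np.eye(3)[year][:, 1:]])

X = design(tecto, year)
split = int(0.8 * n)                         # 80/20 development/validation
beta, *_ = np.linalg.lstsq(X[:split], log_radon[:split], rcond=None)
pred = X[split:] @ beta                      # predictions on held-out data
rho = stats.spearmanr(pred, log_radon[split:])[0]
```

Spearman correlation is invariant to the exp back-transform, so ranking predicted log-radon is equivalent to ranking predicted radon concentrations.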

Relevance: 90.00%

Publisher:

Abstract:

Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: Multivariate Regression, Neural Networks and the k-Nearest Neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a "Simple Committee" technique that used averaged predictions from a set of 10 pre-selected input spaces chosen by the training data, and a "Minimum Variance Committee" technique where the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. This latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space ("Best Combination" technique), the Simple Committee technique and the Minimum Variance Committee technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-Nearest Neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or the calibration of the underlying GT-Power model.
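The two committee ideas can be sketched abstractly: average predictions across input spaces ("Simple Committee"), or pick, per prediction, the input space where the modelling methods disagree least ("Minimum Variance Committee"). Everything below is a toy setup with synthetic predictions, not the paper's GT-Power-transformed spaces.

```python
import numpy as np

rng = np.random.default_rng(3)
truth = 1.0                       # quantity being predicted (toy)
n_spaces = 10                     # transformed input spaces

# Hypothetical predictions of three modelling methods (regression, neural
# net, k-NN) in each input space. Spaces whose transformation works well
# give predictions that are both accurate and mutually consistent.
noise_scale = rng.uniform(0.05, 0.5, n_spaces)
preds = truth + rng.normal(0.0, noise_scale[:, None], (n_spaces, 3))

# "Simple Committee": average the predictions over the set of spaces.
simple_committee = preds.mean()

# "Minimum Variance Committee": per prediction, use the space where the
# three methods disagree least (smallest spread across methods).
disagreement = preds.var(axis=1)
best_space = int(np.argmin(disagreement))
min_var_committee = preds[best_space].mean()
```

The min-variance selection exploits the fact that method disagreement is an observable proxy for per-space error, which is why it tends to equalize the three methods' performance.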

Relevance: 90.00%

Publisher:

Abstract:

The quantification of the structural properties of snow is traditionally based on model-based stereology. Model-based stereology requires assumptions about the shape of the investigated structure. Here, we show how the density, specific surface area and grain boundary area can be measured using a design-based method, in which no assumptions about structural properties are necessary. The stereological results were also compared to X-ray tomography to control the accuracy of the method. The specific surface area calculated with the stereological method was 19.8 ± 12.3% smaller than with X-ray tomography. For the density, the stereological method gave results that were 11.7 ± 12.1% larger than X-ray tomography. The statistical analysis of the estimates confirmed that the stereological method and the sampling used are accurate. This stereological method was successfully tested on artificially produced ice beads as well as on several snow types. Combining stereology and polarisation microscopy provides a good estimate of grain boundary areas in ice beads and in natural snow, with some limitations.
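The design-based principle underlying such measurements is that surface density follows from intersection counts with test lines (S_V = 2 I_L) without any shape assumptions. The sketch below verifies this on a sphere, whose surface area is known exactly; the Monte Carlo line sampling is illustrative and not the paper's sectioning protocol.

```python
import numpy as np

rng = np.random.default_rng(4)
r = 0.3                 # sphere radius inside a unit cube
n_lines = 200_000

# Random vertical test lines through the unit cube containing a sphere
# centred at (0.5, 0.5, 0.5); each line that hits the sphere crosses its
# surface twice. (Vertical lines suffice here only because a sphere is
# isotropic; general structures need isotropic line orientations.)
xy = rng.uniform(0.0, 1.0, (n_lines, 2))
hits = ((xy - 0.5) ** 2).sum(axis=1) < r ** 2
intersections = 2 * hits.sum()

L_total = n_lines * 1.0                   # each test line has length 1
S_V_est = 2 * intersections / L_total     # design-based: S_V = 2 * I_L
S_V_true = 4 * np.pi * r ** 2             # sphere surface per unit volume
```

Dividing the surface density of the ice phase by the ice density would then give the specific surface area per unit mass, as reported in the paper.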

Relevance: 90.00%

Publisher:

Abstract:

Professor Sir David R. Cox (DRC) is widely acknowledged as among the most important scientists of the second half of the twentieth century. He inherited the mantle of statistical science from Pearson and Fisher, advanced their ideas, and translated statistical theory into practice so as to forever change the application of statistics in many fields, but especially biology and medicine. The logistic and proportional hazards models he substantially developed, are arguably among the most influential biostatistical methods in current practice. This paper looks forward over the period from DRC's 80th to 90th birthdays, to speculate about the future of biostatistics, drawing lessons from DRC's contributions along the way. We consider "Cox's model" of biostatistics, an approach to statistical science that: formulates scientific questions or quantities in terms of parameters gamma in probability models f(y; gamma) that represent in a parsimonious fashion, the underlying scientific mechanisms (Cox, 1997); partition the parameters gamma = theta, eta into a subset of interest theta and other "nuisance parameters" eta necessary to complete the probability distribution (Cox and Hinkley, 1974); develops methods of inference about the scientific quantities that depend as little as possible upon the nuisance parameters (Barndorff-Nielsen and Cox, 1989); and thinks critically about the appropriate conditional distribution on which to base infrences. We briefly review exciting biomedical and public health challenges that are capable of driving statistical developments in the next decade. We discuss the statistical models and model-based inferences central to the CM approach, contrasting them with computationally-intensive strategies for prediction and inference advocated by Breiman and others (e.g. Breiman, 2001) and to more traditional design-based methods of inference (Fisher, 1935). 
We discuss the hierarchical (multi-level) model as an example of the future challenges and opportunities for model-based inference. We then consider the role of conditional inference, a second key element of the CM. Recent examples from genetics are used to illustrate these ideas. Finally, the paper examines causal inference and statistical computing, two other topics we believe will be central to biostatistics research and practice in the coming decade. Throughout the paper, we attempt to indicate how DRC's work and the "Cox Model" have set a standard of excellence to which all can aspire in the future.
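The interest/nuisance partition gamma = (theta, eta) can be illustrated with a toy profile likelihood, a standard device for reducing dependence on nuisance parameters. The example below (not from the paper; the normal model and data are assumptions for illustration) profiles out the variance eta to infer a mean theta:

```python
import math

def profile_loglik(theta, y):
    """Profile log-likelihood for the mean theta of a normal model, with
    the variance treated as a nuisance parameter eta and maximized out:
    eta_hat(theta) = mean((y - theta)^2)."""
    n = len(y)
    eta_hat = sum((yi - theta) ** 2 for yi in y) / n
    return -0.5 * n * (math.log(2 * math.pi * eta_hat) + 1)

y = [1.2, 0.8, 1.5, 1.1, 0.9]  # illustrative data
# The profile likelihood is maximized at the sample mean,
# regardless of the (profiled-out) nuisance variance.
grid = [i / 1000 for i in range(500, 1500)]
theta_hat = max(grid, key=lambda t: profile_loglik(t, y))
print(round(theta_hat, 2))  # 1.1
```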

Relevância:

90.00%

Publicador:

Resumo:

Submicroscopic changes in chromosomal DNA copy number dosage are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of basepairs across the genome. Genome-wide association studies (GWAS) may simultaneously screen for copy number-phenotype and SNP-phenotype associations as part of the analytic strategy. However, genome-wide array analyses are particularly susceptible to batch effects as the logistics of preparing DNA and processing thousands of arrays often involves multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post-hoc quality control procedures that exclude regions that are associated with batch. Our work extends previous model-based approaches for copy number estimation by explicitly modeling batch effects and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of diallelic genotype calls from experimental data to estimate batch- and locus-specific parameters of background and signal without the requirement of training data. We illustrate these ideas using a study of bipolar disease and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in quantile-normalized intensities, while the latter illustrates the robustness of our approach to datasets where as many as 25% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy-number scale to investigate mosaicism and guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position. 
The software is open source and implemented in the R package CRLMM, available at Bioconductor (http://www.bioconductor.org).
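A central ingredient of the approach is shrinkage of noisy locus-specific estimates toward batch-level values. The sketch below shows the generic precision-weighted shrinkage idea only; the weights, parameter names, and numbers are assumptions, not the CRLMM internals:

```python
def shrink_variance(s2_locus, n_locus, s2_batch, n0):
    """Generic shrinkage estimator: pool a noisy locus-specific variance
    toward the batch-level variance, weighting the locus sample size
    n_locus against a prior weight n0 (all values hypothetical)."""
    return (n_locus * s2_locus + n0 * s2_batch) / (n_locus + n0)

# A locus observed in few samples is pulled strongly toward the batch
# value; a well-observed locus mostly keeps its own estimate.
print(shrink_variance(s2_locus=4.0, n_locus=5, s2_batch=1.0, n0=20))    # 1.6
print(shrink_variance(s2_locus=4.0, n_locus=500, s2_batch=1.0, n0=20))
```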

Relevância:

90.00%

Publicador:

Resumo:

The purpose of this study is to develop statistical methodology to facilitate indirect estimation of the concentration of antiretroviral drugs and viral loads in the prostate gland and the seminal vesicle. The differences in antiretroviral drug concentrations in these organs may lead to suboptimal concentrations in one gland compared to the other. Suboptimal levels of the antiretroviral drugs will not be able to fully suppress the virus in that gland, leading to a source of sexually transmissible virus and increasing the chance of selecting for drug-resistant virus. This information may be useful in selecting an antiretroviral drug regimen that will achieve optimal concentrations in most of the male genital tract glands. Using fractionally collected semen ejaculates, Lundquist (1949) measured levels of surrogate markers in each fraction that are uniquely produced by specific male accessory glands. To determine the original glandular concentrations of the surrogate markers, Lundquist solved a simultaneous series of linear equations. This method has several limitations: in particular, it does not yield a unique solution, it does not address measurement error, and it disregards inter-subject variability in the parameters. To cope with these limitations, we developed a mechanistic latent variable model based on the physiology of the male genital tract and surrogate markers. We employ a Bayesian approach and perform a sensitivity analysis with regard to the distributional assumptions on the random effects and priors. The model and Bayesian approach are validated on experimental data where the concentration of a drug should be (biologically) differentially distributed between the two glands. In this example, the Bayesian model-based conclusions are found to be robust to model specification, and this hierarchical approach leads to more scientifically valid conclusions than the original methodology. 
In particular, unlike existing methods, the proposed model-based approach was not affected by a common form of outliers.
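The Lundquist-style deconvolution amounts to solving mixing equations of the form c_i = f_i * C_p + (1 - f_i) * C_sv, where f_i is the proportion of prostatic fluid in fraction i (inferred from surrogate markers) and c_i is the measured concentration. A minimal least-squares sketch with invented data (the paper's actual model is Bayesian and hierarchical; this only shows the underlying linear structure):

```python
def glandular_concentrations(fractions):
    """Least-squares solution of c_i = f_i*C_p + (1-f_i)*C_sv for the
    prostatic (C_p) and seminal-vesicle (C_sv) concentrations, given
    (f_i, c_i) pairs. All numbers here are illustrative only."""
    # Normal equations for the two-parameter linear model.
    a11 = sum(f * f for f, _ in fractions)
    a12 = sum(f * (1 - f) for f, _ in fractions)
    a22 = sum((1 - f) ** 2 for f, _ in fractions)
    b1 = sum(f * c for f, c in fractions)
    b2 = sum((1 - f) * c for f, c in fractions)
    det = a11 * a22 - a12 * a12
    c_p = (b1 * a22 - b2 * a12) / det
    c_sv = (a11 * b2 - a12 * b1) / det
    return c_p, c_sv

# Three fractions with known prostatic proportions and measured levels,
# generated from C_p = 10 and C_sv = 4.
data = [(0.9, 9.4), (0.5, 7.0), (0.2, 5.2)]
c_p, c_sv = glandular_concentrations(data)
print(round(c_p, 1), round(c_sv, 1))  # 10.0 4.0
```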

Relevância:

90.00%

Publicador:

Resumo:

BACKGROUND: Many HIV-infected patients on highly active antiretroviral therapy (HAART) experience metabolic complications including dyslipidaemia and insulin resistance, which may increase their coronary heart disease (CHD) risk. We developed a prognostic model for CHD tailored to the changes in risk factors observed in patients starting HAART. METHODS: Data from five cohort studies (British Regional Heart Study, Caerphilly and Speedwell Studies, Framingham Offspring Study, Whitehall II) on 13,100 men aged 40-70, with 114,443 years of follow-up, were used. CHD was defined as myocardial infarction or death from CHD. Model fit was assessed using the Akaike Information Criterion; generalizability across cohorts was examined using internal-external cross-validation. RESULTS: A parametric model based on the Gompertz distribution generalized best. Variables included in the model were systolic blood pressure, total cholesterol, high-density lipoprotein cholesterol, triglyceride, glucose, diabetes mellitus, body mass index and smoking status. Compared with patients not on HAART, the estimated CHD hazard ratio (HR) for patients on HAART was 1.46 (95% CI 1.15-1.86) for moderate and 2.48 (95% CI 1.76-3.51) for severe metabolic complications. CONCLUSIONS: The change in the risk of CHD in HIV-infected men starting HAART can be estimated based on typical changes in risk factors, assuming that HRs estimated using data from non-infected men are applicable to HIV-infected men. Based on this model the risk of CHD is likely to increase, but increases may often be modest, and could be offset by lifestyle changes.
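A Gompertz survival model combined with a proportional-hazards ratio gives absolute risk directly. The sketch below uses the HR of 2.48 reported for severe metabolic complications, but the baseline Gompertz parameters are invented for illustration and are not the fitted cohort model:

```python
import math

def gompertz_survival(t, lam, gamma, hr=1.0):
    """CHD-free survival under a Gompertz baseline hazard
    h(t) = lam * exp(gamma * t), scaled by a proportional-hazards
    ratio hr: S(t) = exp(-hr * (lam/gamma) * (exp(gamma*t) - 1))."""
    cumhaz = (lam / gamma) * (math.exp(gamma * t) - 1.0)
    return math.exp(-hr * cumhaz)

# Hypothetical 10-year CHD risk at baseline vs. on HAART with severe
# metabolic complications (HR = 2.48 from the abstract).
risk_base = 1 - gompertz_survival(10, lam=0.002, gamma=0.08)
risk_haart = 1 - gompertz_survival(10, lam=0.002, gamma=0.08, hr=2.48)
print(round(risk_base, 3), round(risk_haart, 3))
```

Note that with a nontrivial cumulative hazard the absolute risk scales less than proportionally with the HR.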

Relevância:

90.00%

Publicador:

Resumo:

This report presents the development of a Stochastic Knock Detection (SKD) method for combustion knock detection in a spark-ignition engine using a model-based design approach. A Knock Signal Simulator (KSS) was developed as the plant model for the engine. The KSS generates cycle-to-cycle accelerometer knock intensities following a stochastic approach, with intensities generated using a Monte Carlo method from a lognormal distribution whose parameters have been predetermined from engine tests and depend upon spark timing, engine speed, and load. The lognormal distribution has been shown in previous studies to be a good approximation to the distribution of measured knock intensities over a range of engine conditions and spark timings for multiple engines. The SKD method is implemented in a Knock Detection Module (KDM), which processes the knock intensities generated by the KSS with a stochastic distribution estimation algorithm and outputs estimates of the high and low knock intensity levels, which characterize knock and the reference level, respectively. These estimates are then used to determine a knock factor, which provides a quantitative measure of knock level and can be used as a feedback signal to control engine knock. The knock factor is analyzed and compared with a traditional knock detection method to detect engine knock under various engine operating conditions. To verify the effectiveness of the SKD method, a knock controller was also developed and tested in a model-in-the-loop (MIL) system. The objective of the knock controller is to allow the engine to operate as close as possible to its borderline spark timing without significant engine knock. The controller parameters were tuned to minimize the cycle-to-cycle variation in spark timing and the settling time of the controller in responding to a step increase in spark advance resulting in the onset of engine knock. 
The simulation results showed that the combined system can adequately model engine knock and evaluate knock control strategies for a wide range of engine operating conditions.
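The KSS/KDM pipeline can be sketched as Monte Carlo draws from a lognormal distribution followed by a quantile-based estimate of the high and low intensity levels. The parameters and the quantile rule below are assumptions for illustration, not the report's calibrated values or its actual estimation algorithm:

```python
import random

def knock_intensities(n, mu, sigma, seed=42):
    """Cycle-to-cycle knock intensities drawn from a lognormal
    distribution, mimicking the KSS; in the real simulator mu and sigma
    would be mapped from spark timing, engine speed, and load."""
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

def knock_factor(intensities, low_q=0.1, high_q=0.9):
    """A simple knock factor: ratio of a high-intensity level (knock) to
    a low-intensity level (reference), taken from empirical quantiles as
    a stand-in for the KDM's distribution-estimation algorithm."""
    xs = sorted(intensities)
    low = xs[int(low_q * len(xs))]
    high = xs[int(high_q * len(xs))]
    return high / low

samples = knock_intensities(2000, mu=0.0, sigma=0.5)
print(round(knock_factor(samples), 2))
```

In a closed loop, this knock factor would be the feedback signal that retards spark timing as it grows.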

Relevância:

90.00%

Publicador:

Resumo:

Ultra-high performance fiber reinforced concrete (UHPFRC) has arisen from the implementation of a variety of concrete engineering and materials science concepts developed over the last century. This material offers superior strength, serviceability, and durability over its conventional counterparts. One of the most important advantages of UHPFRC over other concrete materials is its ability to resist fracture through the use of randomly dispersed discontinuous fibers and improvements to the fiber-matrix bond. Of particular interest are the material's ability to achieve higher loads after first crack and its high fracture toughness. In this research, a study of the fracture behavior of UHPFRC with steel fibers was conducted to examine the effect of several parameters on the fracture behavior and to develop a fracture model based on a non-linear curve fit of the data. To determine this, a series of three-point bending tests was performed on various single edge notched prisms (SENPs). Compression tests were also performed for quality assurance. Testing was conducted on specimens of different cross-sections, span/depth (S/D) ratios, curing regimes, ages, and fiber contents. By comparing the results from prisms of different sizes, this study examines the weakening mechanism due to the size effect. Furthermore, by employing the concept of fracture energy it was possible to obtain a comparison of the fracture toughness and ductility. The model was determined from a fit to P-w fracture curves and cross-referenced against the results for comparability. Once obtained, the model was compared to the models proposed by the AFGC in 2003 and to the ACI 544 model for conventional fiber reinforced concretes.
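The fracture-energy comparison rests on integrating the load vs. crack-opening (P-w) curve and normalizing by the ligament area of the notched prism. A minimal sketch with invented P-w data points (the specimen dimensions and loads are assumptions, not the study's measurements):

```python
def fracture_energy(load_w, ligament_area):
    """Fracture energy G_F: work of fracture (area under the P-w curve,
    trapezoidal rule) divided by the ligament area of the notched prism.
    Units here: w in mm, P in N, area in mm^2, so G_F is in N/mm."""
    work = 0.0
    for (w0, p0), (w1, p1) in zip(load_w, load_w[1:]):
        work += 0.5 * (p0 + p1) * (w1 - w0)
    return work / ligament_area

# Hypothetical P-w points (mm, N) for a single SENP specimen:
# rise to peak load after first crack, then softening.
curve = [(0.0, 0.0), (0.05, 4000.0), (0.5, 2500.0), (2.0, 500.0)]
print(round(fracture_energy(curve, ligament_area=7500.0), 4))  # 0.5083
```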

Relevância:

90.00%

Publicador:

Resumo:

BACKGROUND: Gene therapy has recently been introduced as a novel approach to treat ischemic tissues by using the angiogenic potential of certain growth factors. We investigated the effect of adenovirus-mediated gene therapy with transforming growth factor-beta (TGF-beta) delivered into the subdermal space to treat ischemically challenged epigastric skin flaps in a rat model. MATERIAL AND METHODS: A pilot study was conducted in a group of 5 animals pretreated with Ad-GFP; expression of green fluorescent protein in the skin flap sections was demonstrated under fluorescence microscopy at 2, 4, and 7 days after the treatment, indicating successful transfection of the skin flaps following subdermal gene therapy. Next, 30 male Sprague Dawley rats were divided into 3 groups of 10 rats each. An epigastric skin flap model, based solely on the right inferior epigastric vessels, was used in this study. Rats received subdermal injections of adenovirus encoding TGF-beta (Ad-TGF-beta) or green fluorescent protein (Ad-GFP) as treatment control. The third group (n = 10) received saline and served as a control group. A flap measuring 8 x 8 cm was outlined on the abdominal skin extending from the xiphoid process proximally and the pubic region distally, to the anterior axillary lines bilaterally. Just prior to flap elevation, the injections were given subdermally in the left upper corner of the flap. The flap was then sutured back to its bed. Flap viability was evaluated seven days after the initial operation. Digital images of the epigastric flaps were taken, and areas of necrotic zones relative to total flap surface area were measured and expressed as percentages using a software program. RESULTS: There was a significant increase in mean percent surviving area in the Ad-TGF-beta group relative to the two control groups (P < 0.05; Ad-TGF-beta: 90.3 +/- 4.0% versus Ad-GFP: 82.2 +/- 8.7% and saline: 82.6 +/- 4.3%). 
CONCLUSIONS: In this study, the authors were able to demonstrate that adenovirus-mediated gene therapy using TGF-beta ameliorated ischemic necrosis in an epigastric skin flap model, as confirmed by a significant reduction in the necrotic zones of the flap. The results of this study raise the possibility of using adenovirus-mediated TGF-beta gene therapy to promote perfusion in the random portion of skin flaps, especially in high-risk patients.
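The outcome measure above is a planimetric percentage: necrotic area divided by total flap area in a segmented digital image. A toy sketch of that measurement on a hypothetical binary mask (the study used an unnamed image-analysis program; this only illustrates the arithmetic):

```python
def percent_necrotic(mask):
    """Percentage of necrotic pixels in a binary flap mask
    (1 = necrotic, 0 = viable)."""
    flat = [px for row in mask for px in row]
    return 100.0 * sum(flat) / len(flat)

# Toy 4x5 mask of a segmented flap image (hypothetical data):
# 4 necrotic pixels out of 20.
mask = [
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1],
]
print(percent_necrotic(mask))  # 20.0
```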