935 results for [JEL:C5] Mathematical and Quantitative Methods - Econometric Modeling
Abstract:
The Police Officer trained at the Instituto Superior de Ciências Policiais e Segurança Interna (ISCPSI) is a key figure, as a future senior commander of the Polícia de Segurança Pública (PSP), since it falls to him or her to ensure that the Institution's objectives are met. The training provided at ISCPSI is therefore of great importance, as it equips Officers with the knowledge and competencies needed to perform their duties. This study, based on qualitative and quantitative methods, aims, on the one hand, to determine whether there are differences in the acquisition of knowledge and competencies between PSP Police Officers who completed the Integrated Master's Course in Police Sciences (CMICP) under the boarding (internato) regime and those who completed it as day students (externato) and, on the other hand, to assess whether the costs associated with the boarding regime justify its continuation. A case study was carried out covering three CMICP courses at ISCPSI: interviews were conducted with the Directors and Commanders of the Student Corps in charge during those courses, and questionnaires were administered to a significant sample of trainees from the same courses, some of whom attended as boarders and others as day students. The investigation shows that, in the respondents' view, there are no differences between Police Officers who completed the course as boarders and those who completed it as day students with respect to the knowledge and competencies acquired. Accordingly, maintaining the boarding regime is not justified: there is no loss in the quality of the training received, and discontinuing it would yield significant savings on the costs the regime entails.
Abstract:
Since the beginning of the National Program for Production and Use of Biodiesel in Brazil in 2004, different raw materials have been evaluated for biodiesel production, in an attempt to match the country's agricultural diversity to the desire to reduce production costs. To determine the chemical composition of biodiesel produced from common vegetable oils, international methods have been widely used in Brazil. However, analytical problems have been detected when analyzing biodiesel samples produced from some alternative raw materials; that was the case for biodiesel from castor oil. To overcome these problems, new methodologies were developed using different chromatographic columns, standards and quantitative methods. The priority was to simplify the equipment configuration, perform faster analyses, reduce costs and facilitate the routine of biodiesel research and production laboratories. For quantifying free glycerin, ethylene glycol was used instead of 1,2,4-butanetriol, with no loss in the quality of the results; ethylene glycol is a cheaper and more readily available standard. For methanol analyses, headspace sampling was not used and the cost of the equipment was lower. A detailed determination of the esters supported a deeper knowledge of biodiesel composition. Reporting the experiments and conclusions of the research that resulted in the development of alternative methods for quality control of the composition of biodiesel produced in Brazil, a country with considerable agricultural species variability, is the goal of this thesis, set out in the following pages.
Abstract:
The aim of this thesis was to describe and explore how the partner relationship of patient–partner dyads is affected following cardiac disease and, in particular, atrial fibrillation (AF) in one of the spouses. The thesis is based on four individual studies with different designs: descriptive (I), explorative (II, IV), and cross-sectional (III). Applied methods comprised a systematic review (I) and qualitative (II, IV) and quantitative methods (III). Participants in the studies were couples in which one of the spouses was afflicted with AF. Consistent with a systemic perspective, the research focused on the dyad as the unit of analysis. To identify and describe the current research position and knowledge base, the data for the systematic review were analyzed using an integrative approach. To explore couples' main concern, interview data (n=12 couples) in study II were analyzed using classical grounded theory. Associations between patients and partners (n=91 couples) were analyzed through the Actor–Partner Interdependence Model using structural equation modelling (III). To explore couples' illness beliefs, interview data (n=9 couples) in study IV were analyzed using Gadamerian hermeneutics. Study I revealed five themes of how the partner relationship is affected following cardiac disease: overprotection, communication deficiency, sexual concerns, changes in domestic roles, and adjustment to illness. Study II showed that couples living with AF experienced uncertainty as the common main concern, rooted in the causation of AF and apprehension about AF episodes. The theory of Managing Uncertainty revealed the strategies of explicit sharing (mutual collaboration and finding resemblance) and implicit sharing (keeping distance and tacit understanding). Patients and spouses showed significant differences in self-reported physical and mental health, with patients rating themselves lower than spouses did (III). 
Several actor effects were identified, suggesting that emotional distress affects and is associated with perceived health. Patient partner effects and spouse partner effects were observed for vitality, indicating that higher levels of depressive symptoms in patients and spouses were associated with lower vitality in their partners. In study IV, couples' core and secondary illness beliefs were revealed. From the core illness belief that "the heart is a representation of life," two secondary illness beliefs were derived: AF is a threat to life, and AF can and must be explained. From the core illness belief that "change is an integral part of life," two secondary illness beliefs were derived: AF is a disruption in our lives, and AF will not interfere with our lives. Finally, from the core illness belief that "adaptation is fundamental in life," two secondary illness beliefs were derived: AF entails adjustment in daily life, and AF entails confidence in and adherence to professional care. In conclusion, the thesis results suggest that illness, in terms of cardiac disease and AF, affected and influenced the couple in aspects such as making sense of AF, responding to AF, and mutually incorporating and dealing with AF in their daily lives. In light of this, the thesis results suggest that clinicians working with persons with AF and their partners should adopt a systemic view with consideration of couples' reciprocity and interdependence, but also have knowledge of AF in terms of pathophysiology, its nature (i.e., cause, consequences, and trajectory), and treatments. A possible way to achieve this is the clinical utilization of an FSN-based framework, such as the FamHC. Even if a formalized FSN framework is not utilized, partners should not be neglected but rather be considered a resource and be part of clinical caring activities. 
This could be met by inviting partners to take part in rounds, treatment decisions, discharge calls, follow-up visits or other clinical caring activities. Likewise, interventional studies should include the couple as the unit of analysis as well as the target of interventions.
Abstract:
The purpose of this case study is to report on the use of learning journals as a strategy to encourage critical reflection in the field of graphic design. Very little empirical research has been published regarding the use of critical reflection in learning journals in this field, and nothing has been documented at the college level. To that end, the goal of this research endeavor was to investigate whether second-year students in the NewMedia and Publication Design Program at a small Anglophone CEGEP in Québec, enrolled in a Page Layout and Design course, learn more deeply by reflecting in action during design projects or by reflecting on action after completing design projects. Secondarily, indications of a possible change in self-efficacy were examined. Two hypotheses were posited: 1) reflection-on-action journaling will promote a deeper approach to learning than reflection-in-action journaling, and 2) the level of self-efficacy in graphic design improves as students are encouraged to think reflectively. A mixed methods approach, combining qualitative and quantitative methods, was used to collect and analyze the data. Content analysis of journal entries and interview responses was the primary method used to address the first hypothesis. Students were required to journal twice for each of three projects, once during the project and again one week after the project had been submitted. In addition, data regarding the students' perception of journaling were obtained by administering a survey and conducting interviews. For the second hypothesis, quantitative data were gathered via two surveys, one administered early in the Fall 2011 semester and the second administered early in the Winter 2012 semester. Supplementary data regarding self-efficacy were obtained through content analysis of journal entries and interviews. Coded journal entries firmly supported the hypothesis that reflection-on-action journaling promotes deep learning. 
Using a taxonomy developed by Kember et al. (1999) wherein "critical reflection" is considered the highest level of reflection, it was found that only 5% of the coded responses in the reflection-in-action journals were deemed of the highest level, whereas 39% were considered critical reflection in the reflection-on-action journals. The findings from the interviews suggest that students had some initial concerns about the value of journaling, but these concerns were later dismissed as students learned that journaling was a valuable tool that helped them reflect and learn. All participants indicated that journaling changed their learning processes as they thought much more about what they were doing while they were doing it. They were taking the learning they had acquired and thinking about how they would apply it to new projects; this is critical reflection. The survey findings did not support the conclusive results of the comparison of journal instruments, where an increase of 35% in critical reflection was noted in the reflection-on-action journals. In Chapter 5, reasons for this incongruence are explored. Furthermore, based on the journals, surveys, and interviews, there is not enough evidence at this time to support the hypothesis that self-efficacy improves when students are encouraged to think reflectively. It could be hypothesized, however, that one's self-efficacy does not change in such a short period of time. In conclusion, the findings established in this case study make a practical contribution to the literature concerning the promotion of deep learning in the field of graphic design, as this researcher's hypothesis was supported that reflection-on-action journaling promoted deeper learning than reflection-in-action journaling. When examining the increases in critical reflection from reflection-in-action to the reflection-on-action journals, it was found that all students but one showed an increase in critical reflection in reflection-on-action journals. 
It is therefore recommended that production-oriented program instructors consider integrating reflection-on-action journaling into their courses where projects are given.
Abstract:
This study presents a review of new instruments for the impact assessment of libraries and a case study of the impact evaluation of the Library of the Faculty of Science, University of Porto (FCUP), from the students' point of view. We conducted mixed methods research, i.e., research that includes both qualitative data, to describe characteristics, in particular human actions, and quantitative data, represented by numbers that indicate exact amounts and can be statistically manipulated. Applying International Standard ISO 16439:2014 (E) - Information and documentation - Methods and procedures for assessing the impact of libraries, we collected 20 opinion texts from students of different nationalities, published in «Notícias da Biblioteca» from January 2013 to December 2014, and conducted seven interviews.
Abstract:
This PhD thesis explores the ecological responses of bird species to glacial-interglacial transitions during the late Quaternary in the Western Palearctic, using multiple approaches at different scales and highlighting the value of the bird fossil record and quantitative methods for elucidating biotic trends in relation to long-term climate changes. The taxonomic and taphonomic analyses of the avian fossil assemblages from four Italian Middle and Upper Pleistocene sedimentary successions (Grotta del Cavallo, Grotta di Fumane, Grotta di Castelcivita, and Grotta di Uluzzo C) allowed us to reconstruct local-scale patterns in birds' responses to climate changes. These assemblages are characterized by the presence of temperate species and by the occasional presence of cold-dwelling species during glacials, related to range shifts. These local patterns are supported by those identified at the continental scale. In this respect, I mapped the present-day and Last Glacial Maximum (LGM) climatic envelopes of species with different climatic requirements. The results show substantial stability in the ranges of temperate species and pronounced changes in the ranges of cold-dwelling species, supported by their fossil records. The responses to climate oscillations are therefore closely related to the thermal niches of the investigated species. I also clarified the dynamics of the presence of boreal and arctic bird species in Mediterranean Europe, due to southern range shifts, during glacial phases. After reassessing the reliability of the existing fossil evidence, I show that this phenomenon is not as common as previously thought, with important implications for the paleoclimatic and paleoenvironmental significance of the targeted species. I was also able to explore the potential of multivariate and rarefaction methods in the analyses of avian fossils from Grotta del Cavallo. 
These approaches helped to delineate the main drivers of taphonomic damages and the dynamics of species diversity in relation to climate-driven paleoenvironmental changes.
Abstract:
Since its implementation in 2012, the European Citizens' Initiative (ECI) has captured the attention of academics and politicians for its apparent potential as an instrument of participatory democracy capable of promoting citizens' direct involvement in EU decision-making. However, since its launch this instrument seems to have disappointed the hopes and expectations placed in it and, instead of serving as a bridge between citizens and the EU institutions, appears to have become clear evidence of the EU's bureaucratic leadership in Brussels. With the reform of its implementing legislation, the European institutions sought to give the ECI another chance to reach its full democratizing potential. Three years after the entry into force of the new Regulation 2019/788, and more than ten years after its introduction into the European legal order, we believe the time is right to assess the ECI's real impact in promoting participatory democracy in the EU. To this end, this doctoral thesis undertakes a comprehensive analysis of this opportunity structure for citizen participation, exploring its origins, its regulatory framework, its practical application and its implications for European democracy through an interdisciplinary approach that combines both qualitative and quantitative methods. This research aims to provide a deeper and more critical understanding of the ECI and of its role in building a more participatory Europe that is closer to its citizens.
Abstract:
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat: it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically weighted regression analysis produced the most accurate result, accommodating non-stationary coefficient behavior and demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism. 
This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality-of-life.
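The geographically weighted regression mentioned above fits a separate, spatially weighted least-squares model at each location, so coefficients are allowed to vary across space. A minimal sketch of one such local fit with a Gaussian distance kernel (the single-predictor setup, function names, and bandwidth choice are illustrative assumptions, not the dissertation's implementation):

```python
import math

def gwr_local_fit(coords, X, y, target, bandwidth):
    # Weighted least squares at one target location: observations are
    # down-weighted by a Gaussian kernel of their distance to the target.
    w = [math.exp(-((cx - target[0]) ** 2 + (cy - target[1]) ** 2)
                  / (2.0 * bandwidth ** 2))
         for cx, cy in coords]
    sw = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, X)) / sw   # weighted mean of x
    ym = sum(wi * yi for wi, yi in zip(w, y)) / sw   # weighted mean of y
    num = sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, X, y))
    den = sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, X))
    slope = num / den
    intercept = ym - slope * xm
    return intercept, slope
```

Repeating this fit over a grid of target locations produces the spatially varying coefficient surfaces that make non-stationary behavior visible.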
Abstract:
To test a mathematical model for measuring blinking kinematics. Spontaneous and reflex blinks of 23 healthy subjects were recorded with two different temporal resolutions. A magnetic search coil was used to record 77 blinks sampled at 200 Hz and 2 kHz in 13 subjects. A video system with low temporal resolution (30 Hz) was employed to register 60 blinks of 10 other subjects. The experimental data points were fitted with a model that assumes that the upper eyelid movement can be divided into two parts: an impulsive accelerated motion followed by a damped harmonic oscillation. All spontaneous and reflex blinks, including those recorded with low resolution, were well fitted by the model with a median coefficient of determination of 0.990. No significant difference was observed when the parameters of the blinks were estimated with the under-damped or critically damped solutions of the harmonic oscillator. On the other hand, the over-damped solution was not applicable to fit any movement. There was good agreement between the model and numerical estimation of the amplitude but not of maximum velocity. Spontaneous and reflex blinks can be mathematically described as consisting of two different phases. The down-phase is mainly an accelerated movement followed by a short time that represents the initial part of the damped harmonic oscillation. The latter is entirely responsible for the up-phase of the movement. Depending on the instantaneous characteristics of each movement, the under-damped or critically damped oscillation is better suited to describe the second phase of the blink. (C) 2010 Elsevier B.V. All rights reserved.
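The two damped-harmonic-oscillator regimes the model distinguishes (under-damped versus critically damped) can be written down in closed form. A sketch of those solutions with illustrative parameter names; in practice the amplitude, damping ratio and natural frequency would be fitted to each recorded blink, which this sketch does not attempt:

```python
import math

def underdamped(t, amp, zeta, w0, phi):
    # Under-damped solution (zeta < 1): a decaying oscillation
    # at the damped frequency wd = w0 * sqrt(1 - zeta^2).
    wd = w0 * math.sqrt(1.0 - zeta ** 2)
    return amp * math.exp(-zeta * w0 * t) * math.cos(wd * t + phi)

def critically_damped(t, a, b, w0):
    # Critically damped solution (zeta == 1): the fastest
    # non-oscillatory return toward equilibrium.
    return (a + b * t) * math.exp(-w0 * t)
```

Per the abstract, either form can describe the up-phase depending on the instantaneous characteristics of the movement, while the over-damped solution never fit.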
Abstract:
Application of novel analytical and investigative methods such as fluorescence in situ hybridization, confocal laser scanning microscopy (CLSM), microelectrodes and advanced numerical simulation has led to new insights into micro- and macroscopic processes in bioreactors. However, the question remains open whether these new findings and the subsequent gain in knowledge are of significant practical relevance and, if so, where and how. To find suitable answers, engineers need to know what can be expected from applying these modern analytical tools. Similarly, scientists could benefit significantly from an intensive dialogue with engineers in order to learn about the practical problems and conditions existing in wastewater treatment systems. In this paper, an attempt is made to help bridge the gap between science and engineering in biological wastewater treatment. We provide an overview of recently developed methods in microbiology and in mathematical modeling and numerical simulation. A questionnaire is presented which may help generate a platform from which further technical and scientific developments can be accomplished. Both the paper and the questionnaire are aimed at encouraging scientists and engineers to enter into an intensive, mutually beneficial dialogue. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
The study of variable stars is an important topic in modern astrophysics. With powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes, and this huge volume of data calls for automated methods as well as human experts. This thesis is devoted to the analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric variables. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena; most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve; its unique shape is a characteristic of each type of variable star.
One way to identify the type of a variable star and classify it is for an expert to inspect the phased light curve visually. For many years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into stages such as observation, data reduction, data analysis, modeling and classification. Modeling helps to determine short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, together with other derived parameters. Of these, period is the most important, since a wrong period leads to sparse light curves and misleading information. Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. For ground-based observations this is due to daylight and weather conditions, while observations from space may suffer from the impact of cosmic ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though variable star observation is not their primary purpose.
The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data, and most of these surveys release their data to the public for further analysis. Many period search algorithms exist for astronomical time series analysis; they can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can have several causes: power leakage to other frequencies due to the finite total interval, finite sampling interval and finite amount of data; aliasing due to regular sampling; spurious periods caused by long gaps; and power flow to harmonic frequencies, an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton (AAVSO) states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state that "The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
It would benefit the variable star community if basic parameters such as period, amplitude and phase could be obtained more accurately when huge time series databases are subjected to automation. In the present thesis, the theories behind four popular period search methods are studied, their strengths and weaknesses are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
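Of the non-parametric period search methods named above, Phase Dispersion Minimisation (Stellingwerf 1978) is the simplest to sketch: fold the light curve on each trial period and keep the period that minimises the ratio of phase-binned variance to total variance. A minimal illustration, not the thesis code; the bin count and trial grid are arbitrary choices:

```python
def pdm_statistic(times, mags, period, n_bins=10):
    # Fold on a trial period, then compare the pooled within-bin
    # variance to the overall variance. A good period groups similar
    # magnitudes into the same phase bin, driving theta toward 0.
    phases = [(t / period) % 1.0 for t in times]
    mean = sum(mags) / len(mags)
    total_var = sum((m - mean) ** 2 for m in mags) / (len(mags) - 1)
    bins = [[] for _ in range(n_bins)]
    for p, m in zip(phases, mags):
        bins[min(int(p * n_bins), n_bins - 1)].append(m)
    num, den = 0.0, 0
    for b in bins:
        if len(b) > 1:
            bm = sum(b) / len(b)
            num += sum((m - bm) ** 2 for m in b)
            den += len(b) - 1
    return (num / den) / total_var  # theta: small is better

def best_period(times, mags, trial_periods):
    # Scan a grid of trial periods and return the theta-minimising one.
    return min(trial_periods, key=lambda p: pdm_statistic(times, mags, p))
```

Because no sinusoidal shape is assumed, PDM handles the non-sinusoidal light curves (e.g., eclipsing binaries) that trouble Fourier-based methods, though it shares their sensitivity to aliasing from regular sampling.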
Abstract:
Quantitative reverse-transcription polymerase chain reaction (qRT-PCR) is a standard assay in molecular medicine for gene expression analysis. Samples from incisional/needle biopsies, laser-microdissected tumor cells and other biologic sources, normally available in clinical cancer studies, generate very small amounts of RNA, which is restrictive for expression analysis. As a consequence, an RNA amplification procedure is required to assess gene expression levels in such sample types. The reproducibility and accuracy of relative gene expression data produced by a sensitive methodology such as qRT-PCR, when cDNA converted from amplified (A) RNA is used as template, had not yet been properly addressed. In this study, to evaluate this issue, we performed 1 round of linear RNA amplification in 2 breast cell lines (C5.2 and HB4a) and assessed the relative expression of 34 genes using cDNA converted from both nonamplified (NA) and A RNA. Relative gene expression was obtained from beta actin- or glyceraldehyde 3-phosphate dehydrogenase-normalized data using different dilutions of cDNA, and the variability and fold-change differences in expression between the 2 methods were compared. Our data showed that 1 round of linear RNA amplification, even with suboptimal-quality RNA, is appropriate for generating reproducible and high-fidelity qRT-PCR relative expression data with confidence levels similar to those from NA samples. However, using cDNA converted from both A and NA RNA in a single qRT-PCR experiment clearly creates bias in relative gene expression data.
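The relative-quantification arithmetic underlying such reference-gene-normalized qRT-PCR comparisons is commonly the 2^(-ΔΔCt) method of Livak and Schmittgen; the abstract does not name the exact formula used, so the following is an assumption for illustration only:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    # 2^-ΔΔCt: normalize the target gene's Ct to a reference gene
    # (e.g., beta actin or GAPDH), then to a calibrator sample.
    delta_sample = ct_target - ct_ref              # ΔCt of the test sample
    delta_calibrator = ct_target_cal - ct_ref_cal  # ΔCt of the calibrator
    return 2.0 ** -(delta_sample - delta_calibrator)
```

Under the method's assumption of near-ideal amplification efficiency, each Ct unit corresponds to a twofold difference in starting template, which is why template source (A vs. NA cDNA) must be consistent within one experiment.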
Abstract:
The development of new statistical and computational methods is increasingly making it possible to bridge the gap between hard sciences and humanities. In this study, we propose an approach based on a quantitative evaluation of attributes of objects in fields of humanities, from which concepts such as dialectics and opposition are formally defined mathematically. As case studies, we analyzed the temporal evolution of classical music and philosophy by obtaining data for 8 features characterizing the corresponding fields for 7 well-known composers and philosophers, which were treated with multivariate statistics and pattern recognition methods. A bootstrap method was applied to avoid statistical bias caused by the small sample data set, with which hundreds of artificial composers and philosophers were generated, influenced by the 7 names originally chosen. Upon defining indices for opposition, skewness and counter-dialectics, we confirmed the intuitive analysis of historians in that classical music evolved according to a master apprentice tradition, while in philosophy changes were driven by opposition. Though these case studies were meant only to show the possibility of treating phenomena in humanities quantitatively, including a quantitative measure of concepts such as dialectics and opposition, the results are encouraging for further application of the approach presented here to many other areas, since it is entirely generic.
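The bootstrap described (resampling to offset the small set of composers and philosophers) can be sketched generically; the statistic, data and resample count below are illustrative, not the study's feature vectors:

```python
import random

def bootstrap_statistic(data, statistic, n_resamples=1000, seed=42):
    # Draw resamples with replacement and collect the statistic on each,
    # approximating its sampling distribution from a small data set.
    rng = random.Random(seed)
    n = len(data)
    values = []
    for _ in range(n_resamples):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        values.append(statistic(resample))
    return values
```

The spread of the returned values gives a bias and confidence-interval estimate without assuming any parametric distribution, which is the appeal when only 7 original observations per field are available.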
Abstract:
The objective of this research was to develop a high-fidelity dynamic model of a parafoil-payload system with respect to its application in the Ship Launched Aerial Delivery System (SLADS). SLADS is a concept in which cargo can be transferred from ship to shore using a parafoil-payload system. It is accomplished in two phases: an initial towing phase, when the glider follows the towing vessel in a passive lift mode, and an autonomous gliding phase, when the system is guided to the desired point. While many previous researchers have analyzed the parafoil-payload system when it is released from another airborne vehicle, limited work has been done on towing the system up from ground or sea. One of the main contributions of this research was the development of a nonlinear dynamic model of a towed parafoil-payload system. After an extensive literature review of existing methods of modeling a parafoil-payload system, a five degree-of-freedom model was developed. The inertial and geometric properties of the system were investigated to predict accurate results in the simulation environment. Since extensive research has been done on determining the aerodynamic characteristics of a paraglider, an existing aerodynamic model was chosen to incorporate the effects of air flow around the flexible paraglider wing. During the towing phase, it is essential that the parafoil-payload system follow the line of the towing vessel's path to prevent an unstable flight condition called 'lockout'. A detailed study of the causes of lockout, its mathematical representation, and the flight conditions and parameters related to lockout constitutes another contribution of this work. A linearized model of the parafoil-payload system was developed and used to analyze the stability of the system about equilibrium conditions. The relationship between the control surface inputs and stability was investigated. 
In addition to stability of flight, one more important objective of SLADS is to tow up the parafoil-payload system as fast as possible. The tension in the tow cable is directly proportional to the rate of ascent of the parafoil-payload system. Lockout instability is more favorable when tow tensions are large. Thus there is a tradeoff between susceptibility to lockout and rapid deployment. Control strategies were also developed for optimal tow up and to maintain stability in the event of disturbances.
Abstract:
Empirical evidence and theoretical studies suggest that the phenotype, i.e., cellular- and molecular-scale dynamics, including proliferation rate and adhesiveness due to microenvironmental factors and gene expression that govern tumor growth and invasiveness, also determine gross tumor-scale morphology. It has been difficult to quantify the relative effect of these links on disease progression and prognosis using conventional clinical and experimental methods and observables. As a result, successful individualized treatment of highly malignant and invasive cancers, such as glioblastoma, via surgical resection and chemotherapy cannot be offered and outcomes are generally poor. What is needed is a deterministic, quantifiable method to enable understanding of the connections between phenotype and tumor morphology. Here, we critically assess advantages and disadvantages of recent computational modeling efforts (e.g., continuum, discrete, and cellular automata models) that have pursued this understanding. Based on this assessment, we review a multiscale, i.e., from the molecular to the gross tumor scale, mathematical and computational "first-principle" approach based on mass conservation and other physical laws, such as employed in reaction-diffusion systems. Model variables describe known characteristics of tumor behavior, and parameters and functional relationships across scales are informed from in vitro, in vivo and ex vivo biology. We review the feasibility of this methodology that, once coupled to tumor imaging and tumor biopsy or cell culture data, should enable prediction of tumor growth and therapy outcome through quantification of the relation between the underlying dynamics and morphological characteristics. 
In particular, morphologic stability analysis of this mathematical model reveals that tumor cell patterning at the tumor-host interface is regulated by cell proliferation, adhesion and other phenotypic characteristics: histopathology information of tumor boundary can be inputted to the mathematical model and used as a phenotype-diagnostic tool to predict collective and individual tumor cell invasion of surrounding tissue. This approach further provides a means to deterministically test effects of novel and hypothetical therapy strategies on tumor behavior.
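A "first-principle" reaction-diffusion formulation of the kind reviewed can be illustrated in one dimension by the Fisher-KPP equation, du/dt = D u_xx + r u(1 - u), with u a normalized tumor cell density, D a motility coefficient and r a proliferation rate. An explicit Euler sketch; the grid, parameters and boundary treatment are illustrative choices, not taken from the reviewed multiscale models:

```python
def fisher_kpp_step(u, D=1.0, r=1.0, dx=1.0, dt=0.2):
    # One explicit Euler step of du/dt = D*u_xx + r*u*(1-u),
    # with reflecting (no-flux) boundaries; stable for D*dt/dx**2 <= 0.5.
    n = len(u)
    new = []
    for i in range(n):
        left = u[i - 1] if i > 0 else u[1]
        right = u[i + 1] if i < n - 1 else u[n - 2]
        lap = (left - 2.0 * u[i] + right) / dx ** 2
        new.append(u[i] + dt * (D * lap + r * u[i] * (1.0 - u[i])))
    return new

def simulate(n_cells=50, n_steps=40):
    # Start with a fully occupied region on the left; the density
    # front then invades the empty region (speed ~ 2*sqrt(D*r)).
    u = [1.0] * 5 + [0.0] * (n_cells - 5)
    for _ in range(n_steps):
        u = fisher_kpp_step(u)
    return u
```

Even this toy model exhibits the front propagation that mass-conservation-based tumor models refine with phenotype-informed parameters; the morphologic stability analyses discussed above ask when such a front stays smooth versus breaking into invasive fingers.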