930 results for Bridges Vibration Mathematical models


Relevance:

100.00%

Publisher:

Abstract:

The development of a new set of frost property measurement techniques to be used in the control of frost growth and defrosting processes in refrigeration systems was investigated. Holographic interferometry and infrared thermometry were used to measure the temperature of the frost-air interface, while a beam element load sensor was used to obtain the weight of the deposited frost layer. The proposed measurement techniques were tested for the cases of natural and forced convection, and characteristic charts were obtained for a set of operating conditions. An improvement of existing frost growth mathematical models was also investigated. The early stage of frost nucleation is commonly not considered in these models; instead, an initial value of layer thickness and porosity is usually assumed. A nucleation model was developed to obtain the droplet diameter and surface porosity at the end of the early frosting period. Drop-wise early condensation on a cold flat plate under natural convection from warm (room-temperature), humid air was modeled. A nucleation rate was found, and the relation between heat and mass transfer (the Lewis number) was obtained. The Lewis number was found to be much smaller than unity, the standard value usually assumed in most frosting numerical models. The nucleation model was validated against available experimental data for the early nucleation and full growth stages of the frosting process. The combination of frost top temperature and weight variation signals can now be used to control defrosting timing, and the developed early nucleation model can now be used to simulate the entire frost growth process on any surface material.
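
As a side note on the Lewis number mentioned above: it compares thermal diffusivity to mass diffusivity, Le = α/D. The sketch below evaluates it for moist air using typical textbook property values (assumed for illustration; they are not the thesis's measured or modeled values). The result is close to unity, which is why Le ≈ 1 is the standard assumption in frosting models that the nucleation model calls into question for the early, drop-wise stage.

```python
# Minimal sketch: Lewis number Le = alpha / D_AB for moist air near room temperature.
# Property values are typical textbook figures, assumed for illustration only;
# they are not taken from the thesis.

k = 0.026        # thermal conductivity of air, W/(m·K)
rho = 1.2        # air density, kg/m^3
cp = 1006.0      # specific heat of air, J/(kg·K)
D_AB = 2.5e-5    # diffusivity of water vapor in air, m^2/s

alpha = k / (rho * cp)   # thermal diffusivity, m^2/s
Le = alpha / D_AB        # Lewis number (dimensionless)

print(f"alpha = {alpha:.2e} m^2/s, Le = {Le:.2f}")
# For these bulk-air values Le comes out close to 1, the standard assumption that
# the thesis finds does not hold during the early (drop-wise) frosting stage.
```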

Relevance:

100.00%

Publisher:

Abstract:

In this work, I studied and found the exact solutions of a mathematical model applied to cell receptors of the integrin family. In the model, integrins are treated as a two-state system, active and inactive. When integrins are in the inactive state they can diffuse in the membrane, whereas in the active state they are crystallized in the membrane and unable to diffuse. A change in the cell-surface concentration of a substance called the activator triggers integrin activation. Moreover, these heterodimers can bind an inhibitory molecule with control and regulation functions, which we call v; upon binding to the receptor, it increases the production of the activating substance, which we call u. This sets off a positive feedback mechanism. The inhibitor v regulates the production of u and therefore acts as a modulator: thanks to this fine regulation, the positive feedback mechanism is able to limit itself. A system of differential equations is then built starting from the simple chemical reactions involved. Once the system of equations is set up, the solutions for the inhibitor and activator concentrations can be derived for a particular choice of parameters. Finally, a test can be performed to see what the model predicts in terms of integrins. To do so, I used a step-function activation, inserted it into the system, and evaluated the receptor dynamics. The result agrees with expectations: bound integrins are found mainly at the edges of the activated region, while free integrins are depleted within it.
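
A minimal sketch of the activator-inhibitor feedback described above, with a saturating positive feedback term that limits itself; the rate laws and parameter values are illustrative assumptions, not the exact reaction scheme solved in the thesis.

```python
# Minimal sketch of an activator (u) / inhibitor (v) pair with saturating positive
# feedback, loosely inspired by the two-state integrin model described above.
# Rate laws and parameters are illustrative assumptions, not the thesis's equations.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, a=1.0, b=0.5, c=1.0, d=0.8):
    u, v = y
    du = a * v / (1.0 + v) - b * u   # bound inhibitor boosts u production, but the effect saturates
    dv = c * u - d * v               # activator drives inhibitor binding; v decays
    return [du, dv]

sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 0.0], t_eval=np.linspace(0.0, 50.0, 6))
print(np.round(sol.y, 3))   # u and v rise and settle at a finite level: the feedback self-limits
```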

Relevance:

100.00%

Publisher:

Abstract:

In perifusion cell cultures, the culture medium flows continuously through a chamber containing immobilized cells and the effluent is collected at the end. In our main applications, gonadotropin releasing hormone (GnRH) or oxytocin is introduced into the chamber as the input. They stimulate the cells to secrete luteinizing hormone (LH), which is collected in the effluent. To relate the effluent LH concentration to the cellular processes producing it, we develop and analyze a mathematical model consisting of coupled partial differential equations describing the intracellular signaling and the movement of substances in the cell chamber. We analyze three different data sets and give cellular mechanisms that explain the data. Our model indicates that two negative feedback loops, one fast and one slow, are needed to explain the data, and we give their biological bases. We demonstrate that different LH outcomes in oxytocin and GnRH stimulations might originate from different receptor dynamics. We analyze the model to understand the influence of parameters, such as the rate of the medium flow or the fraction collection time, on the experimental outcomes. We investigate how the rates of binding and dissociation of the input hormone to and from its receptor influence its movement down the chamber. Finally, we formulate and analyze simpler models that allow us to predict the distortion of a square pulse due to hormone-receptor interactions and to estimate parameters using perifusion data. We show that in the limit of high binding and dissociation the square pulse moves as a diffusing Gaussian, and in this limit the biological parameters can be estimated.
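
A minimal sketch of the transport picture analyzed above: a square inlet pulse advected down the chamber while binding reversibly to immobilized receptors, discretized with an upwind finite-difference scheme. The geometry, flow speed, and rate constants are assumptions for illustration, not the paper's parameters.

```python
# Minimal sketch: a square inlet pulse advected through a perifusion chamber while binding
# reversibly to immobilized receptors (upwind finite differences, explicit time stepping).
import numpy as np

nx, L = 200, 1.0
dx = L / nx
v = 1.0                         # medium flow speed (chamber lengths per unit time)
dt = 0.8 * dx / v               # CFL-stable time step
kon, koff, RT = 5.0, 5.0, 1.0   # binding rate, dissociation rate, total receptor density

c = np.zeros(nx)                # free hormone along the chamber
b = np.zeros(nx)                # receptor-bound hormone
out_c = []                      # effluent record

t = 0.0
while t < 3.0:
    inflow = 1.0 if t < 0.2 else 0.0        # square pulse at the inlet
    adv = np.empty(nx)
    adv[0] = -v * (c[0] - inflow) / dx      # upwind difference at the inlet boundary
    adv[1:] = -v * (c[1:] - c[:-1]) / dx
    react = kon * c * (RT - b) - koff * b   # net binding rate
    c += dt * (adv - react)
    b += dt * react
    t += dt
    out_c.append(c[-1])

# The effluent pulse is delayed and broadened relative to the inlet square pulse;
# with faster binding/dissociation it approaches a moving Gaussian, as noted above.
print(f"peak effluent concentration: {max(out_c):.2f} (inlet height 1.0)")
```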

Relevance:

100.00%

Publisher:

Abstract:

Uncertainty quantification (UQ) is both an old and a new concept. The current novelty lies in the interaction and synthesis of mathematical models, computer experiments, statistics, field/real experiments, and probability theory, with a particular emphasis on large-scale simulations by computer models. The challenges come not only from the complexity of the scientific questions but also from the sheer size of the data. The focus of this thesis is to provide statistical models that scale to the massive data produced in computer experiments and real experiments, through fast and robust statistical inference.

Chapter 2 provides a practical approach for simultaneously emulating/approximating a massive number of functions, with an application to hazard quantification for the Soufrière Hills volcano on Montserrat. Chapter 3 addresses another problem with massive data, in which the number of observations of a function is large; an exact algorithm that runs in linear time is developed for interpolating methylation levels. Chapters 4 and 5 both concern robust inference for the models: Chapter 4 proposes a new robustness criterion for parameter estimation and shows that several inference procedures satisfy it, while Chapter 5 develops a new prior that satisfies additional criteria and is therefore proposed for use in practice.
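
As context for the emulation theme of Chapter 2, the sketch below shows the generic building block of computer-model emulation: fitting a Gaussian process to a small set of simulator runs and using it as a cheap surrogate with uncertainty. The toy simulator, design, and kernel are assumptions for illustration; they are not the hazard-quantification setup or the scalable methods developed in the thesis.

```python
# Minimal sketch of emulating an expensive computer model with a Gaussian process.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulator(x):
    # stand-in for a costly computer-model run
    return np.sin(3 * x) + 0.5 * x

X_train = np.linspace(0, 2, 12).reshape(-1, 1)   # a small design of simulator runs
y_train = expensive_simulator(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                              normalize_y=True)
gp.fit(X_train, y_train)

X_new = np.array([[0.7], [1.3]])
mean, std = gp.predict(X_new, return_std=True)   # cheap surrogate predictions + uncertainty
print(mean, std)
```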

Relevance:

100.00%

Publisher:

Abstract:

Fire is a form of uncontrolled combustion which generates heat, smoke, and toxic and irritant gases. All of these products are harmful to people and account for the heavy annual toll of 800 lives and £1,000,000,000 worth of property damage in Britain alone. The new discipline of Fire Safety Engineering has developed as a means of reducing these unacceptable losses. One of the main tools of Fire Safety Engineering is the mathematical model, and over the past 15 years a number of mathematical models have emerged to cater for the needs of this discipline. Part of the difficulty faced by the Fire Safety Engineer is the selection of the most appropriate modelling tool for the job. To make an informed choice it is essential to have a good understanding of the various modelling approaches, their capabilities and limitations. In this paper some of the fundamental modelling tools used to predict fire and evacuation are investigated, as are the issues associated with their use and recent developments in modelling technology.

Relevance:

100.00%

Publisher:

Abstract:

The analysis of steel and composite frames has traditionally been carried out by idealizing beam-to-column connections as either rigid or pinned. Although some advanced analysis methods have been proposed to account for semi-rigid connections, the performance of these methods depends strongly on the proper modeling of connection behavior. The primary challenge in modeling beam-to-column connections is their inelastic response and continuously varying stiffness, strength, and ductility. In this dissertation, two distinct approaches (mathematical models and informational models) are proposed to account for the complex hysteretic behavior of beam-to-column connections. The performance of the two approaches is examined, followed by a discussion of their merits and deficiencies. To capitalize on the merits of both mathematical and informational representations, a new approach, a hybrid modeling framework, is developed and demonstrated through the modeling of beam-to-column connections.

Component-based modeling is a compromise between two extremes in the field of mathematical modeling: simplified global models and finite element models. In the component-based modeling of angle connections, the five critical components of deformation are identified. Constitutive relationships for the angles, the column panel zone, and the contact between angles and column flanges are derived using only material and geometric properties together with theoretical mechanics considerations. Those for slip and bolt-hole ovalization are simplified using empirically suggested mathematical representations and expert opinion. A mathematical model is then assembled as a macro-element by combining rigid bars and springs that represent the constitutive relationships of the components. Lastly, the moment-rotation curves of the mathematical models are compared with those of experimental tests. In the case of a top-and-seat angle connection with double web angles, the pinched hysteretic response is predicted quite well by complete mechanical models, which rely only on material and geometric properties. On the other hand, to capture the highly pinched behavior of a top-and-seat angle connection without web angles, the mathematical model requires components for slip and bolt-hole ovalization, which are more amenable to informational modeling.

An alternative is informational modeling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. The information is extracted from observed data and stored in neural networks. Two different training data sets, analytically generated data and experimental data, are tested to examine the performance of the informational models. Both informational models show acceptable agreement with the moment-rotation curves of the experiments. Adding a degradation parameter improves the informational models when modeling highly pinched hysteretic behavior. However, informational models cannot represent the contribution of individual components and therefore do not provide insight into the underlying mechanics of the components.

In this study, a new hybrid modeling framework is proposed, in which a conventional mathematical model is complemented by informational methods. The basic premise of the proposed hybrid methodology is that not all features of the system response are amenable to mathematical modeling, and that informational alternatives should therefore be considered. This may be because (i) the underlying theory is not available or not sufficiently developed, or (ii) the existing theory is too complex and therefore not suitable for modeling within building frame analysis. The role of the informational methods is to model the aspects that the mathematical model leaves out; the autoprogressive algorithm and self-learning simulation extract these missing aspects from the system response. In the hybrid framework, experimental data are an integral part of the modeling, rather than being used strictly for validation. The potential of the hybrid methodology is illustrated by modeling the complex hysteretic behavior of beam-to-column connections. Mechanics-based components of deformation, such as angles, flange plates, and the column panel zone, are idealized in a mathematical model using a complete mechanical approach. Although the mathematical model reproduces the envelope curves in terms of initial stiffness and yield strength, it is not capable of capturing the pinching effects. Pinching is caused mainly by separation between the angles and column flanges as well as slip between the angles or flange plates and the beam flanges; these components of deformation are suitable for informational modeling. Finally, the moment-rotation curves of the hybrid models are validated against those of the experimental tests. The comparison shows that the hybrid models are capable of representing the highly pinched hysteretic behavior of beam-to-column connections. In addition, the developed hybrid model is successfully used to predict the behavior of a newly designed connection.
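
The component-based (macro-element) idea described above can be summarized in a small sketch: the connection moment is assembled from individual spring components acting at lever arms. The component stiffnesses, strengths, lever arms, and the elastic-perfectly-plastic law below are illustrative assumptions, not values or constitutive relationships from the dissertation, and the sketch covers only monotonic loading (it does not reproduce the pinched hysteresis discussed above).

```python
# Minimal sketch of a component-based macro-element: the connection moment is the sum
# of spring-component forces times their lever arms, each spring elastic-perfectly-plastic.
import numpy as np

# (stiffness N/mm, yield force N, lever arm mm) for three hypothetical components
components = [(50e3, 120e3, 300.0),   # top angle
              (50e3, 120e3, 300.0),   # seat angle
              (80e3, 200e3, 150.0)]   # column panel zone contribution

def moment(theta):
    """Connection moment (N·mm) at rotation theta (rad) under monotonic loading."""
    M = 0.0
    for k, Fy, d in components:
        delta = theta * d                   # component deformation
        F = np.clip(k * delta, -Fy, Fy)     # elastic-perfectly-plastic force
        M += F * d                          # contribution to the moment
    return M

for theta in (0.002, 0.01, 0.03):
    print(f"theta = {theta:.3f} rad -> M = {moment(theta) / 1e6:.1f} kN·m")
```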

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, we investigate the role of applied physics in epidemiological surveillance through the application of mathematical models, network science and machine learning. The spread of a communicable disease depends on many biological, social, and health factors. The large masses of data available make it possible, on the one hand, to monitor the evolution and spread of pathogenic organisms; on the other hand, to study the behavior of people, their opinions and habits. Presented here are three lines of research in which an attempt was made to solve real epidemiological problems through data analysis and the use of statistical and mathematical models. In Chapter 1, we applied language-inspired Deep Learning models to transform influenza protein sequences into vectors encoding their information content. We then attempted to reconstruct the antigenic properties of different viral strains using regression models and to identify the mutations responsible for vaccine escape. In Chapter 2, we constructed a compartmental model to describe the spread of a bacterium within a hospital ward. The model was informed and validated on time series of clinical measurements, and a sensitivity analysis was used to assess the impact of different control measures. Finally (Chapter 3) we reconstructed the network of retweets among COVID-19 themed Twitter users in the early months of the SARS-CoV-2 pandemic. By means of community detection algorithms and centrality measures, we characterized users’ attention shifts in the network, showing that scientific communities, initially the most retweeted, lost influence over time to national political communities. In the Conclusion, we highlighted the importance of the work done in light of the main contemporary challenges for epidemiological surveillance. In particular, we present reflections on the importance of nowcasting and forecasting, the relationship between data and scientific research, and the need to unite the different scales of epidemiological surveillance.
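
As a rough illustration of the compartmental approach mentioned for Chapter 2, the sketch below integrates a minimal ward-level colonization model with susceptible and colonized patients. The compartment structure and all rates are assumptions for illustration only; they are not the model that was informed and validated on the clinical time series.

```python
# Minimal sketch of a compartmental model for bacterial colonization in a hospital ward:
# susceptible (S) and colonized (C) patients with cross-transmission and discharge/admission.
import numpy as np
from scipy.integrate import solve_ivp

beta = 0.20    # per-day transmission rate per colonized patient
mu = 0.10      # per-day discharge rate (beds refilled with susceptible admissions)
gamma = 0.03   # per-day decolonization rate (e.g., effect of a control measure)
N = 20         # ward size (kept full)

def rhs(t, y):
    S, C = y
    infection = beta * S * C / N
    return [-infection + (mu + gamma) * C,   # discharged/decolonized return to the S pool
            infection - (mu + gamma) * C]

sol = solve_ivp(rhs, (0, 180), [N - 1, 1], t_eval=np.linspace(0, 180, 7))
print(np.round(sol.y[1], 2))   # colonized patients approach an endemic level; raising gamma lowers it
```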

Relevance:

100.00%

Publisher:

Abstract:

Artificial Intelligence (AI) and Machine Learning (ML) are novel data analysis techniques that provide highly accurate predictions. They are widely adopted across industries to improve efficiency and decision-making, and they are also being used to develop intelligent systems. Their success rests on complex mathematical models whose decisions and rationale are usually difficult for human users to comprehend, to the point that they are dubbed black boxes. This is particularly relevant in sensitive and highly regulated domains. To mitigate and possibly solve this issue, the Explainable AI (XAI) field has become prominent in recent years. XAI consists of models and techniques that enable understanding of the intricate patterns discovered by black-box models. In this thesis, we consider model-agnostic XAI techniques applicable to tabular data, with a particular focus on the credit scoring domain. Special attention is dedicated to the LIME framework, for which we propose several modifications to the vanilla algorithm, in particular: a pair of complementary Stability Indices that accurately measure LIME stability, and the OptiLIME policy, which helps the practitioner find the proper balance between the stability and reliability of explanations. We subsequently put forward GLEAMS, a model-agnostic interpretable surrogate model that needs to be trained only once while providing both local and global explanations of the black-box model. GLEAMS produces feature attributions and what-if scenarios from both the dataset and the model perspective. Finally, we argue that synthetic data are an emerging trend in AI, increasingly used in place of original data to train complex models. To explain the outcomes of such models, we must guarantee that the synthetic data are reliable enough for their explanations to carry over to real-world individuals. To this end we propose DAISYnt, a suite of tests that measure the quality and privacy of synthetic tabular data.
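
As an illustration of the stability issue that motivates the Stability Indices and OptiLIME, the sketch below re-runs vanilla LIME several times on the same row of a tabular dataset and inspects how the selected features vary. The dataset and model are placeholders, and the snippet is not the thesis's indices or policy, only a naive demonstration of the phenomenon.

```python
# Minimal sketch: checking LIME's stability by repeating the explanation of one row.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(data.data,
                                 feature_names=list(data.feature_names),
                                 class_names=list(data.target_names),
                                 mode="classification")

row = data.data[0]
weights = []
for _ in range(5):   # repeat the explanation of the same row
    exp = explainer.explain_instance(row, model.predict_proba, num_features=5)
    weights.append(dict(exp.as_list()))

# Because LIME samples a new neighbourhood each time, the selected features and their
# weights can vary across runs; large variation signals an unstable explanation.
print([sorted(w.keys()) for w in weights])
```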

Relevance:

100.00%

Publisher:

Abstract:

The first mathematical model able to describe the prototype of an excitable system comparable to a neuron was developed by R. FitzHugh and J. Nagumo in 1961. However schematic, this model is an important starting point for research on neuronal dynamics in neuroscience, and it is the progenitor of a series of works aimed at improving the accuracy and predictive power of mathematical models for the sciences. The high degree of complexity in the study of neurons and inter-neuronal dynamics means, however, that many of the features and possibilities of the field are not yet fully understood. In this work, a model inspired by the original work of FitzHugh and Nagumo is examined in depth. The model introduces a time-delayed self-coupling term into the system of differential equations, and thus becomes representative of mean-field models able to describe the macroscopic states of an ensemble of neurons. The introduction of the delay serves a more realistic description of neuronal systems and produces richer and more complex dynamics than the original version of the model. The existence of a limit-cycle solution in the model that includes the time-delay term will be shown, a solution that cannot be interpreted within the framework of Hopf bifurcations. In order to explore some of the basic features of neuron modeling, the approach of dynamical systems theory will mainly be used, supplemented where necessary with notions from physiology. In conclusion, an in-depth section on the numerical integration of delay differential equations is provided.
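
A minimal numerical sketch of the kind of system discussed above: a FitzHugh-Nagumo model with a time-delayed self-coupling term, integrated with the Euler method and a history buffer. The coupling form and parameter values are assumptions for illustration, not those analyzed in the thesis.

```python
# Minimal sketch: Euler integration of FitzHugh-Nagumo with delayed self-coupling g*v(t - tau).
import numpy as np

a, b, eps = 0.7, 0.8, 0.08   # classic FitzHugh-Nagumo parameters
I_ext = 0.5                  # constant external current
g, tau = 0.3, 5.0            # delayed self-coupling strength and delay
dt, T = 0.01, 200.0

n_delay = int(tau / dt)
n_steps = int(T / dt)
v = np.empty(n_steps + 1)
w = np.empty(n_steps + 1)
v[0], w[0] = -1.0, -0.5      # constant history: v(t) = v[0] for t <= 0

for n in range(n_steps):
    v_delayed = v[n - n_delay] if n >= n_delay else v[0]   # history buffer lookup
    dv = v[n] - v[n]**3 / 3.0 - w[n] + I_ext + g * v_delayed
    dw = eps * (v[n] + a - b * w[n])
    v[n + 1] = v[n] + dt * dv
    w[n + 1] = w[n] + dt * dw

# Oscillation amplitude over the second half of the run (non-zero for sustained oscillations)
print(v[n_steps // 2:].max() - v[n_steps // 2:].min())
```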

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this study was to propose a specific lactate minimum test for elite basketball players considering: the Running Anaerobic Sprint Test (RAST) as a hyperlactatemia inductor, short distances (a specific distance of 20 m) during the progressive phase, and mathematical analysis to interpret aerobic and anaerobic variables. The basketball players were assigned to four groups: All positions (n=26), Guard (n=7), Forward (n=11) and Center (n=8). The hyperlactatemia induction (RAST) consisted of 6 maximal sprints over 35 m separated by 10 s of recovery. The progressive phase of the lactate minimum test consisted of 5 stages paced by an electronic metronome (8.0, 9.0, 10.0, 11.0 and 12.0 km/h) over a 20 m distance. The RAST variables and the lactate values were analyzed using visual and mathematical models. The lactate minimum intensity determined by the visual method was lower than that obtained from 2nd-degree polynomial fits for the Small Forward and General groups. The power and fatigue index values determined by both methods, visual and 3rd-degree polynomial, were not significantly different between the groups. In conclusion, the RAST is an excellent hyperlactatemia inductor, and the progressive phase of the lactate minimum test over short distances (20 m) can be used specifically to evaluate the aerobic capacity of basketball players. In addition, no differences were observed between the visual and polynomial methods for the RAST variables, but the lactate minimum intensity was influenced by the method of analysis.
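
As an illustration of the polynomial analysis named above, the sketch below fits a 2nd-degree polynomial to lactate concentration versus stage speed and takes the vertex as the lactate minimum intensity; the lactate values are hypothetical, not data from the study.

```python
# Minimal sketch: lactate minimum intensity as the vertex of a 2nd-degree polynomial fit.
import numpy as np

speeds = np.array([8.0, 9.0, 10.0, 11.0, 12.0])   # km/h, the five progressive stages
lactate = np.array([6.8, 5.9, 5.6, 6.1, 7.4])     # mmol/L, hypothetical measurements

a2, a1, a0 = np.polyfit(speeds, lactate, 2)       # lactate ≈ a2*v**2 + a1*v + a0
lactate_min_speed = -a1 / (2 * a2)                # vertex of the parabola

print(f"lactate minimum intensity ≈ {lactate_min_speed:.1f} km/h")
```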

Relevance:

100.00%

Publisher:

Abstract:

Postharvest cooling and/or freezing of horticultural products is carried out to remove heat from the products, giving them a longer period of conservation. Knowledge of the physical properties involved in heat transfer in the 'Roxo de Valinhos' fig is therefore useful for design calculations and systems in food engineering in general, as well as for use in the equations of thermodynamic mathematical models. The thermal conductivity and thermal diffusivity of the whole fig fruit were determined, and from these values the specific heat was obtained. These properties were determined by the transient line heat source method. The results showed that the fig fruit has a thermal conductivity of 0.52 W m⁻¹ °C⁻¹, a thermal diffusivity of 1.56 × 10⁻⁷ m² s⁻¹, a pulp density of 815.6 kg m⁻³ and a specific heat of 4.07 kJ kg⁻¹ °C⁻¹.
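
The specific heat reported above follows from the other three properties through cp = k / (ρ·α); a quick check using the abstract's own values:

```python
# Quick check of the relationship used above: cp = k / (rho * alpha).
# Values are taken directly from the abstract.
k = 0.52          # thermal conductivity, W/(m·°C)
alpha = 1.56e-7   # thermal diffusivity, m²/s
rho = 815.6       # pulp density, kg/m³

cp = k / (rho * alpha)                      # J/(kg·°C)
print(f"cp ≈ {cp / 1000:.2f} kJ/(kg·°C)")   # ≈ 4.09, consistent with the reported 4.07
```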

Relevance:

100.00%

Publisher:

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance:

100.00%

Publisher:

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To develop a method and a device to quantify vision in candelas (cd). Studies measuring vision are important for all the visual sciences. METHODS: This is a theoretical and experimental study. The details of the psychophysical method and of the device calibration are described. Preliminary tests were performed on volunteers. RESULTS: The test is a simple psychophysical test whose result is expressed in units of the International System of Units. With the technical description provided, it will be possible to reproduce the experiment in other research centers. CONCLUSION: Results expressed in luminous intensity (cd) are an option for visual assessment. They will make it possible to extrapolate the measurements to mathematical models and to simulate individual effects with aberrometric data.

Relevance:

100.00%

Publisher:

Abstract:

This study aimed to describe and compare ventilation behavior during an incremental test using three mathematical models, and to compare the shape of the ventilation curve fitted by the best model between aerobically trained (TR) and untrained (UT) men. Thirty-five subjects underwent a treadmill test with 1 km·h⁻¹ increments every minute until exhaustion. Twenty-second ventilation averages were plotted against time and fitted by a bi-segmental regression model (2SRM), a three-segmental regression model (3SRM), and an exponential growth model (GEM). The residual sum of squares (RSS) and mean square error (MSE) were calculated for each model. Correlations were calculated between peak VO2 (VO2PEAK), peak speed (SpeedPEAK), the ventilatory threshold identified by the best model (VT2SRM), and the first derivative of ventilation calculated for workloads below (moderate intensity) and above (heavy intensity) VT2SRM. The RSS and MSE for GEM were significantly higher (p < 0.01) than for 2SRM and 3SRM in the pooled data and in UT, but no significant difference was observed among the mathematical models in TR. In the pooled data, the first derivative at moderate intensities showed significant negative correlations with VT2SRM (r = -0.58; p < 0.01) and SpeedPEAK (r = -0.46; p < 0.05), while the first derivative at heavy intensities showed a significant negative correlation with VT2SRM (r = -0.43; p < 0.05). In the UT group, the first derivative at moderate intensities showed significant negative correlations with VT2SRM (r = -0.65; p < 0.05) and SpeedPEAK (r = -0.61; p < 0.05), while in the TR group the first derivative at heavy intensities showed significant negative correlations with VT2SRM (r = -0.73; p < 0.01), SpeedPEAK (r = -0.73; p < 0.01) and VO2PEAK (r = -0.61; p < 0.05). Ventilation behavior during the incremental treadmill test tends to show only one threshold. UT subjects showed a slower increase in ventilation at moderate intensities, while TR subjects showed a slower increase at heavy intensities.
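
As an illustration of the bi-segmental regression (2SRM) idea used above, the sketch below scans candidate breakpoints, fits a straight line to each side, and keeps the breakpoint with the lowest total residual sum of squares; the ventilation data are synthetic, generated only for the example.

```python
# Minimal sketch of bi-segmental regression: choose the breakpoint minimizing total RSS.
import numpy as np

time = np.arange(20, 620, 20, dtype=float)                 # s, 20-s averaging windows
ve = 15 + 0.03 * time + 0.12 * np.maximum(time - 360, 0)   # L/min, synthetic ventilation
ve += np.random.default_rng(0).normal(0, 1.0, ve.size)     # measurement noise

def rss_of_fit(x, y):
    coeffs = np.polyfit(x, y, 1)
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

best_rss, best_break = np.inf, None
for i in range(3, len(time) - 3):                          # keep at least 3 points per segment
    rss = rss_of_fit(time[:i], ve[:i]) + rss_of_fit(time[i:], ve[i:])
    if rss < best_rss:
        best_rss, best_break = rss, time[i]

print(f"estimated breakpoint (ventilatory threshold) at t ≈ {best_break:.0f} s")
```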