922 results for Signal detection Mathematical models
Abstract:
Mathematical models are increasingly used in environmental science, which raises the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model, the Operational Street Pollution Model (OSPM). To assess the predictive validity of the model, the data are split into an estimation and a prediction data set using two data splitting approaches, and data preparation techniques (clustering and outlier detection) are analysed. The sensitivity analysis, carried out as part of the identifiability analysis, showed that some model parameters were significantly more sensitive than others. Applying the determined optimal parameter values was shown to successfully equilibrate the model biases among the individual streets and species. It was also shown that the frequentist approach applied for the uncertainty calculations underestimated the parameter uncertainties. The model parameter uncertainty was qualitatively assessed to be significant, and reduction strategies were identified.
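As a loose illustration of the workflow described (split the data, estimate parameters iteratively, then probe parameter sensitivity), here is a minimal sketch using a toy surrogate in place of OSPM; the model, parameter names, and data are all illustrative, not the study's code:

# Minimal sketch of the estimation/sensitivity workflow: a toy stand-in
# model, an estimation/prediction split, least-squares fitting, and
# finite-difference sensitivities. Everything here is illustrative.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def street_model(theta, x):
    """Toy stand-in for the dispersion model: concentration vs. wind speed."""
    emission, decay = theta
    return emission * np.exp(-decay * x)

# Synthetic observations, split into estimation and prediction sets.
x = np.linspace(0.5, 10.0, 60)
y = street_model([12.0, 0.3], x) + rng.normal(0.0, 0.4, x.size)
x_est, y_est = x[::2], y[::2]      # estimation set
x_pred, y_pred = x[1::2], y[1::2]  # prediction (validation) set

# Parameter estimation by least squares.
fit = least_squares(lambda th: street_model(th, x_est) - y_est, x0=[5.0, 0.1])
theta_hat = fit.x

# Local sensitivity of the model output to each parameter (finite
# differences); parameters with near-zero columns are poorly identifiable.
eps = 1e-6
J = np.column_stack([
    (street_model(theta_hat + eps * np.eye(2)[i], x_est)
     - street_model(theta_hat, x_est)) / eps
    for i in range(2)
])
print("estimates:", theta_hat)
print("sensitivity norms:", np.linalg.norm(J, axis=0))
print("prediction RMSE:", np.sqrt(np.mean((street_model(theta_hat, x_pred) - y_pred) ** 2)))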
Abstract:
Fire is a form of uncontrolled combustion which generates heat, smoke, and toxic and irritant gases. All of these products are harmful to man and account for the heavy annual cost of 800 lives and £1,000,000,000 worth of property damage in Britain alone. The new discipline of Fire Safety Engineering has developed as a means of reducing these unacceptable losses. One of the main tools of Fire Safety Engineering is the mathematical model, and over the past 15 years a number of mathematical models have emerged to cater for the needs of this discipline. Part of the difficulty faced by the Fire Safety Engineer is selecting the most appropriate modelling tool for the job. To make an informed choice, it is essential to have a good understanding of the various modelling approaches, their capabilities, and their limitations. In this paper, some of the fundamental modelling tools used to predict fire and evacuation are investigated, as are the issues associated with their use and recent developments in modelling technology.
Abstract:
Cognitive radio (CR) was developed to utilize the spectrum bands efficiently. Spectrum sensing and awareness represent the main tasks of a CR, providing the possibility of exploiting the unused bands. In this thesis, we investigate the detection and classification of Long Term Evolution (LTE) single carrier-frequency division multiple access (SC-FDMA) signals, which are used in the LTE uplink, with applications to cognitive radio. We explore the second-order cyclostationarity of LTE SC-FDMA signals and apply results obtained for the cyclic autocorrelation function to signal detection and classification (in other words, to spectrum sensing and awareness). The proposed detection and classification algorithms provide very good performance under various channel conditions, with a short observation time, at low signal-to-noise ratios, and with reduced complexity. The validity of the proposed algorithms is verified using signals generated and acquired by laboratory instrumentation, and the experimental results show a good match with computer simulation results.
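The abstract does not give the estimator itself, but a standard way to exploit second-order cyclostationarity is to estimate the cyclic autocorrelation function, R(alpha, tau) = (1/N) sum_n x[n+tau] x*[n] exp(-j 2 pi alpha n), and test its magnitude at a candidate cycle frequency. A minimal sketch on a toy cyclic-prefix signal (the prefix induces a cyclic feature at tau equal to the block body length); all parameters are illustrative, and this is not the thesis's algorithm:

# Hedged sketch of cyclostationarity-based detection on a toy
# cyclic-prefix signal: estimate the cyclic autocorrelation at the
# candidate cycle frequency and compare it against off-cycle frequencies.
import numpy as np

def caf(x, alpha, tau):
    """Estimate the cyclic autocorrelation R(alpha, tau) of a 1-D signal."""
    n = np.arange(x.size - tau)
    return np.mean(x[n + tau] * np.conj(x[n]) * np.exp(-2j * np.pi * alpha * n))

rng = np.random.default_rng(1)
nu, ncp, nblocks = 64, 16, 128        # body length, cyclic prefix, block count
blocks = []
for _ in range(nblocks):
    body = (rng.normal(size=nu) + 1j * rng.normal(size=nu)) / np.sqrt(2)
    blocks.append(np.concatenate([body[-ncp:], body]))  # prepend cyclic prefix
x = np.concatenate(blocks)

ns = nu + ncp                         # total block length sets the cycle freq
stat = abs(caf(x, alpha=1 / ns, tau=nu))              # CP-induced feature
floor = np.mean([abs(caf(x, alpha=k / (ns * np.pi), tau=nu))
                 for k in (1, 2, 3)])                 # off-cycle frequencies
print(f"CAF at cycle freq: {stat:.3f}, off-cycle floor: {floor:.3f}")
print("signal detected" if stat > 4 * floor else "no cyclostationary feature")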
Abstract:
The analysis of steel and composite frames has traditionally been carried out by idealizing beam-to-column connections as either rigid or pinned. Although some advanced analysis methods have been proposed to account for semi-rigid connections, the performance of these methods depends strongly on proper modeling of connection behavior. The primary challenge in modeling beam-to-column connections is their inelastic response and continuously varying stiffness, strength, and ductility. In this dissertation, two distinct approaches, mathematical models and informational models, are proposed to account for the complex hysteretic behavior of beam-to-column connections. The performance of the two approaches is examined, followed by a discussion of their merits and deficiencies. To capitalize on the merits of both mathematical and informational representations, a new approach, a hybrid modeling framework, is developed and demonstrated through the modeling of beam-to-column connections.

Component-based modeling is a compromise spanning two extremes in the field of mathematical modeling: simplified global models and finite element models. In the component-based modeling of angle connections, the five critical components of excessive deformation are identified. Constitutive relationships for angles, column panel zones, and the contact between angles and column flanges are derived using only material and geometric properties and theoretical mechanics considerations. Those for slip and bolt hole ovalization are simplified by empirically suggested mathematical representations and expert opinion. A mathematical model is then assembled as a macro-element by combining rigid bars and springs that represent the constitutive relationships of the components. Lastly, the moment-rotation curves of the mathematical models are compared with those of experimental tests. In the case of a top-and-seat angle connection with double web angles, a pinched hysteretic response is predicted quite well by complete mechanical models, which rely only on material and geometric properties. On the other hand, to exhibit the highly pinched behavior of a top-and-seat angle connection without web angles, a mathematical model requires slip and bolt hole ovalization components, which are more amenable to informational modeling.

An alternative method is informational modeling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. The information is extracted from observed data and stored in neural networks. Two different training data sets, analytically generated and experimental, are tested to examine the performance of informational models. Both informational models show acceptable agreement with the moment-rotation curves of the experiments. Adding a degradation parameter improves the informational models when modeling highly pinched hysteretic behavior. However, informational models cannot represent the contribution of individual components and therefore do not provide insight into the underlying mechanics of the components.

In this study, a new hybrid modeling framework is proposed, in which a conventional mathematical model is complemented by informational methods. The basic premise of the proposed hybrid methodology is that not all features of system response are amenable to mathematical modeling, hence the informational alternatives. This may be because (i) the underlying theory is not available or not sufficiently developed, or (ii) the existing theory is too complex and therefore not suitable for modeling within building frame analysis. The role of the informational methods is to model the aspects that the mathematical model leaves out; the autoprogressive algorithm and self-learning simulation extract these missing aspects from the system response. In the hybrid framework, experimental data are an integral part of modeling rather than being used strictly for validation. The potential of the hybrid methodology is illustrated by modeling the complex hysteretic behavior of beam-to-column connections. Mechanics-based components of deformation, such as angles, flange plates, and the column panel zone, are idealized into a mathematical model using a complete mechanical approach. Although the mathematical model represents the envelope curves in terms of initial stiffness and yield strength, it is not capable of capturing the pinching effects. Pinching is caused mainly by separation between angles and column flanges as well as slip between angles/flange plates and beam flanges; these components of deformation are suitable for informational modeling. Finally, the moment-rotation curves of the hybrid models are validated against those of the experimental tests. The comparison shows that the hybrid models are capable of representing the highly pinched hysteretic behavior of beam-to-column connections. In addition, the developed hybrid model is successfully used to predict the behavior of a newly designed connection.
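As a rough illustration of the component-based idea (not the dissertation's actual formulation), a macro-element can treat each deformation component as a nonlinear spring acting in series, so the connection rotation at a given moment is the sum of the component rotations. A minimal monotonic sketch with hypothetical bilinear component properties:

# Minimal sketch of a component-based macro-element: components act in
# series, so total rotation is the sum of component rotations at the same
# moment. All component properties below are hypothetical.
import numpy as np

class BilinearComponent:
    """Bilinear moment-rotation relation: elastic stiffness k, yield
    moment my, post-yield stiffness ratio r (illustrative values)."""
    def __init__(self, k, my, r=0.05):
        self.k, self.my, self.r = k, my, r

    def rotation(self, m):
        m = abs(m)
        if m <= self.my:
            return m / self.k
        return self.my / self.k + (m - self.my) / (self.r * self.k)

# Hypothetical components: angle flexure, panel zone, slip/ovalization.
components = [
    BilinearComponent(k=60e3, my=90.0),           # angles [kN*m/rad, kN*m]
    BilinearComponent(k=150e3, my=140.0),         # column panel zone
    BilinearComponent(k=40e3, my=60.0, r=0.15),   # slip + bolt hole ovalization
]

def connection_rotation(m):
    """Series assembly: component rotations add under a common moment."""
    return sum(c.rotation(m) for c in components)

for m in (30.0, 80.0, 120.0):
    print(f"M = {m:6.1f} kN*m  ->  rotation = {connection_rotation(m):.5f} rad")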
Abstract:
There are many different designs for audio amplifiers. Class-D, or switching, amplifiers generate their output signal in the form of a high-frequency square wave of variable duty cycle (ratio of on time to off time). The square-wave nature of the output allows a particularly efficient output stage, with minimal losses. The output is ultimately filtered to remove components of the spectrum above the audio range. Mathematical models are derived here for a variety of related class-D amplifier designs that use negative feedback. These models use an asymptotic expansion in powers of a small parameter related to the ratio of typical audio frequencies to the switching frequency to develop a power series for the output component in the audio spectrum. These models confirm that there is a form of distortion intrinsic to such amplifier designs. The models also explain why two approaches used commercially succeed in largely eliminating this distortion; a new means of overcoming the intrinsic distortion is revealed by the analysis. Copyright (2006) Society for Industrial and Applied Mathematics
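For readers unfamiliar with the mechanism, the sketch below simulates open-loop, naturally sampled PWM: the audio signal is compared against a fast triangular carrier, and the duty cycle of the resulting square wave tracks the audio. Classically, this open-loop scheme adds essentially no audio-band harmonic distortion; the intrinsic distortion analyzed in the paper arises once negative feedback is added. All frequencies and levels are illustrative, and this is not the paper's asymptotic model:

# Hedged illustration of class-D PWM: compare an audio sine against a
# triangular carrier to form the square-wave output, then inspect the
# audio-band spectrum. Parameters are illustrative only.
import numpy as np

fs = 10_000_000                        # simulation sample rate [Hz]
f_audio, f_sw = 1_000.0, 250_000.0     # audio tone and switching frequency
t = np.arange(0, 0.02, 1 / fs)

audio = 0.8 * np.sin(2 * np.pi * f_audio * t)
# Triangle carrier in [-1, 1]; the comparator output is the PWM square wave.
carrier = 2 * np.abs(2 * ((f_sw * t) % 1) - 1) - 1
pwm = np.where(audio > carrier, 1.0, -1.0)

# Audio-band content of the PWM output (what survives the output filter).
spec = np.abs(np.fft.rfft(pwm)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for k in (1, 2, 3):                    # fundamental and low harmonics
    idx = np.argmin(np.abs(freqs - k * f_audio))
    print(f"{k * f_audio:7.0f} Hz : {20 * np.log10(spec[idx] + 1e-12):7.1f} dB")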
Abstract:
Gas-liquid two-phase flow is very common in industrial applications, especially in the oil and gas, chemical, and nuclear industries. As operating conditions such as the phase flow rates, the pipe diameter, and the physical properties of the fluids change, different configurations called flow patterns take place. In oil production, the most frequent pattern is slug flow, in which continuous liquid plugs (liquid slugs) and gas-dominated regions (elongated bubbles) alternate. Offshore scenarios where the pipe lies on the seabed with slight changes of direction are extremely common. With those scenarios and issues in mind, this work presents an experimental study of two-phase gas-liquid slug flow in a duct with a slight change of direction, represented by a horizontal section followed by a downward-sloping pipe stretch. The experiments were carried out at NUEM (Núcleo de Escoamentos Multifásicos - UTFPR). The flow initiated and developed under controlled conditions, and its characteristic parameters were measured with resistive sensors installed at four pipe sections. Two high-speed cameras were also used. From the measured results, the influence of a slight direction change on the slug flow structures, and on the transition between slug flow and stratified flow in the downward section, was evaluated.
Abstract:
Solar radiation is of increasing importance in today's world. Different devices are used to carry out spectral and integrated measurements of solar radiation. The sensors can be divided into the following types: calorimetric, thermomechanical, thermoelectric, and photoelectric. The first three categories are based on components converting the radiation into temperature (or heat) and then into an electrical quantity. Photoelectric sensors, on the other hand, are based on semiconductor or optoelectronic elements that, when irradiated, change their impedance or generate a measurable electric signal. The response of the sensor element depends not only on the intensity of the radiation but also on its wavelengths. The most widely used radiation sensors fit into the first three categories, but thanks to reduced manufacturing costs and the increased integration of electronic systems, photoelectric sensors have become more attractive. In this work we present a study of the behavior of different optoelectronic sensor elements, intended to verify the static response of the elements to the incident radiation. We study the optoelectronic elements using the mathematical models that best fit their response as a function of wavelength. As input to the models, solar radiation values are generated with a radiative transfer model. We also present a model of the spectral response of other sensor types in order to compare the behavior of optoelectronic elements with other sensors currently in use.
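The quantity being modeled is essentially the sensor output as the spectral responsivity weighted against the incident spectral irradiance, S = integral of R(lambda) E(lambda) d(lambda). A minimal sketch with an illustrative silicon-like responsivity and a toy irradiance spectrum; the study's fitted response models and radiative-transfer input are not reproduced here:

# Hedged sketch: sensor signal as the wavelength integral of spectral
# responsivity times spectral irradiance. Both curves are illustrative
# stand-ins, not the study's models.
import numpy as np

wl = np.linspace(300, 1200, 901)       # wavelength grid [nm]
dwl = wl[1] - wl[0]

# Toy silicon-like responsivity: rises with wavelength, cuts off ~1100 nm.
resp = np.clip((wl - 350) / 600, 0, 1) * (wl < 1100)   # [A/W], illustrative

# Toy clear-sky-like irradiance: smooth peak near 550 nm.
irr = 1.3 * np.exp(-0.5 * ((wl - 550) / 250) ** 2)     # [W m^-2 nm^-1]

signal = np.sum(resp * irr) * dwl      # photocurrent density [A m^-2]
broadband = np.sum(irr) * dwl          # what a flat (thermal) sensor sees
print(f"optoelectronic signal: {signal:.1f} A m^-2")
print(f"broadband irradiance:  {broadband:.1f} W m^-2")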
Abstract:
Filamentous fungi are a threat to the conservation of Cultural Heritage. Detection and identification of viable filamentous fungi are therefore crucial for applying adequate safeguard measures. RNA-FISH protocols have previously been applied with this aim to Cultural Heritage samples. However, only hyphae detection has been reported in the literature, even though spores and conidia are not only a potential risk to Cultural Heritage but can also be harmful to the health of visitors, curators, and restorers. The aim of this work was thus to evaluate various permeabilizing strategies for the detection of spores/conidia and hyphae of artworks' biodeteriogenic filamentous fungi by RNA-FISH. In addition, the influence of cell aging on the success of the technique and on the development of fungal autofluorescence (which could hamper RNA-FISH signal detection) was investigated. Five common biodeteriogenic filamentous fungi species isolated from biodeteriorated artworks were used as biological models: Aspergillus niger, Cladosporium sp., Fusarium sp., Penicillium sp., and Exophiala sp. Fungal autofluorescence was only detected in cells harvested from old Fusarium sp. and Exophiala sp. cultures, being aging-dependent; however, it was weak enough to allow autofluorescence and RNA-FISH signals to be distinguished. Autofluorescence was therefore not a limitation for the application of RNA-FISH to the taxa investigated. All the permeabilization strategies tested allowed fungal cells from young cultures to be detected by RNA-FISH, but only the combination of paraformaldehyde with Triton X-100 allowed the detection of conidia/spores and hyphae of old filamentous fungi. All the permeabilization strategies failed to stain Aspergillus niger conidia/spores, which are known to be particularly difficult to permeabilize. Even so, the application of this permeabilization method increased the analytical potential of RNA-FISH in Cultural Heritage biodeterioration studies. Although much work is still required to validate this RNA-FISH approach for real Cultural Heritage samples, it could represent an important advance for the detection not only of hyphae but also of spores and conidia of various filamentous fungi taxa.
Abstract:
Artificial Intelligence (AI) and Machine Learning (ML) are novel data analysis techniques providing very accurate prediction results. They are widely adopted in a variety of industries to improve efficiency and decision-making, and they are also being used to develop intelligent systems. Their success rests upon complex mathematical models whose decisions and rationale are usually difficult for human users to comprehend, to the point of being dubbed black boxes. This is particularly relevant in sensitive and highly regulated domains. To mitigate and possibly solve this issue, the Explainable AI (XAI) field has become prominent in recent years. XAI consists of models and techniques that enable understanding of the intricate patterns discovered by black-box models. In this thesis, we consider model-agnostic XAI techniques applicable to tabular data, with a particular focus on the Credit Scoring domain. Special attention is dedicated to the LIME framework, for which we propose several modifications to the vanilla algorithm, in particular a pair of complementary Stability Indices that accurately measure LIME stability, and the OptiLIME policy, which helps the practitioner find the proper balance between the stability and reliability of explanations. We subsequently put forward GLEAMS, a model-agnostic surrogate interpretable model which needs to be trained only once while providing both local and global explanations of the black-box model. GLEAMS produces feature attributions and what-if scenarios from both the dataset and the model perspective. Finally, we argue that synthetic data are an emerging trend in AI, being used more and more to train complex models in place of original data. To explain the outcomes of such models, we must guarantee that synthetic data are reliable enough for their explanations to translate to real-world individuals. To this end we propose DAISYnt, a suite of tests to measure the quality and privacy of synthetic tabular data.
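LIME's instability comes from its random perturbation step: repeated runs on the same instance can yield different local coefficients. The sketch below shows that mechanism and a crude per-feature spread measure; it is a generic illustration of LIME-style explanation, not the thesis's Stability Indices or OptiLIME:

# Generic sketch of LIME-style local explanation and its run-to-run
# instability: perturb around an instance, weight by proximity, fit a
# weighted linear surrogate, repeat, and inspect the coefficient spread.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box(X):
    """Stand-in black-box model (illustrative)."""
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

def lime_like(x0, n_samples=500, kernel_width=0.75):
    Z = x0 + rng.normal(scale=1.0, size=(n_samples, x0.size))  # perturbations
    dist = np.linalg.norm(Z - x0, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)               # proximity kernel
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, black_box(Z), sample_weight=w)            # weighted fit
    return surrogate.coef_

x0 = np.array([0.3, -1.0, 2.0])
coefs = np.array([lime_like(x0) for _ in range(30)])           # repeated runs
for j, (m, s) in enumerate(zip(coefs.mean(axis=0), coefs.std(axis=0))):
    print(f"feature {j}: coef = {m:+.3f} +/- {s:.3f}")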
Abstract:
The first mathematical model able to describe the prototype of an excitable system comparable to a neuron was developed by R. FitzHugh and J. Nagumo in 1961. However schematic, this model represents an important starting point for neuroscientific research on neuronal dynamics, and it is the forerunner of a series of works aimed at improving the accuracy and predictive power of mathematical models for the sciences. The high degree of complexity in the study of neurons and inter-neuronal dynamics means, however, that many of the field's features and potentialities are not yet fully understood. This work examines a model inspired by the original work of FitzHugh and Nagumo. The model introduces a time-delayed self-coupling term into the system of differential equations, thus becoming representative of mean-field models able to describe the macroscopic states of an ensemble of neurons. The introduction of the delay serves a more realistic description of neuronal systems and produces richer and more complex dynamics than the original version of the model. The existence of a limit-cycle solution is shown for the model including the time-delay term, where this solution cannot be interpreted within the framework of Hopf bifurcations. To explore some of the basic features of neuron modeling, the approach of dynamical systems theory is mainly used, supplemented where necessary by notions from physiology. A concluding section discusses the numerical integration of delay differential equations.
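The abstract does not give the equations; a common delayed FitzHugh-Nagumo variant adds a self-coupling term c*v(t - tau) to the voltage equation. A minimal fixed-step Euler sketch of such a delay differential equation, with all parameter values illustrative rather than taken from the thesis:

# Hedged sketch of a FitzHugh-Nagumo model with delayed self-coupling:
#   v' = v - v^3/3 - w + I + c * v(t - tau)
#   w' = eps * (v + a - b * w)
# The exact form and parameters of the thesis's model are assumptions here.
import numpy as np

a, b, eps, I = 0.7, 0.8, 0.08, 0.5   # classic FHN-style parameters
c, tau = 0.3, 10.0                   # delayed self-coupling strength and delay
dt, t_end = 0.01, 200.0

n_steps = int(t_end / dt)
n_delay = int(tau / dt)
v = np.empty(n_steps + 1)
w = np.empty(n_steps + 1)
v[0], w[0] = -1.0, 1.0

for k in range(n_steps):
    # Constant history v(t) = v(0) for t <= 0, until the buffer fills up.
    v_delayed = v[k - n_delay] if k >= n_delay else v[0]
    dv = v[k] - v[k] ** 3 / 3 - w[k] + I + c * v_delayed
    dw = eps * (v[k] + a - b * w[k])
    v[k + 1] = v[k] + dt * dv
    w[k + 1] = w[k] + dt * dw

print("late-time v range:", v[-2000:].min(), v[-2000:].max())  # oscillation check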
Abstract:
The purpose of this study was to propose a specific lactate minimum test for elite basketball players, considering: the Running Anaerobic Sprint Test (RAST) as a hyperlactatemia inductor, short distances (specific distance, 20 m) during the progressive phase, and mathematical analysis to interpret aerobic and anaerobic variables. The basketball players were assigned to four groups: all positions (n=26), Guard (n=7), Forward (n=11), and Center (n=8). The hyperlactatemia elevation (RAST) method consisted of 6 maximum sprints over 35 m separated by 10 s of recovery. The progressive phase of the lactate minimum test consisted of 5 stages controlled by an electronic metronome (8.0, 9.0, 10.0, 11.0 and 12.0 km/h) over a 20 m distance. The RAST variables and the lactate values were analyzed using visual and mathematical models. The intensity of the lactate minimum test determined by the visual method was lower than that obtained from polynomial fits (2nd degree) for the Small Forward and General groups. The Power and Fatigue Index values, determined by both methods, visual and 3rd-degree polynomial, were not significantly different between the groups. In conclusion, the RAST is an excellent hyperlactatemia inductor, and the progressive intensity of the lactate minimum test using short distances (20 m) can be used to specifically evaluate the aerobic capacity of basketball players. In addition, no differences were observed between the visual and polynomial methods for the RAST variables, but the lactate minimum intensity was influenced by the method of analysis.
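The polynomial method described amounts to fitting a 2nd-degree polynomial to blood lactate against stage velocity and taking the intensity at the parabola's vertex. A minimal sketch with hypothetical lactate values (the study's data are not reproduced):

# Sketch of the polynomial lactate-minimum calculation: fit a 2nd-degree
# polynomial to lactate vs. velocity and take its vertex. Lactate values
# below are hypothetical, for illustration only.
import numpy as np

velocity = np.array([8.0, 9.0, 10.0, 11.0, 12.0])   # stage speeds [km/h]
lactate = np.array([6.1, 5.2, 4.9, 5.4, 6.6])       # blood lactate [mmol/L]

a2, a1, a0 = np.polyfit(velocity, lactate, 2)        # lactate = a2*v^2 + a1*v + a0
v_min = -a1 / (2 * a2)                               # vertex of the parabola
print(f"lactate minimum intensity ~ {v_min:.2f} km/h")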
Abstract:
Post-harvest cooling and/or freezing processes for horticultural products are carried out to remove heat from the products, allowing a longer conservation period. Knowledge of the physical properties governing heat transfer in the 'Roxo de Valinhos' fig is therefore useful for the design of food engineering systems in general, as well as for use in the equations of thermodynamic mathematical models. The thermal conductivity and thermal diffusivity of the whole fig fruit were determined, and from these values the specific heat was calculated. The determinations used the transient Line Heat Source method. The results showed that the fig fruit has a thermal conductivity of 0.52 W m⁻¹ °C⁻¹, a thermal diffusivity of 1.56 x 10⁻⁷ m² s⁻¹, a pulp density of 815.6 kg m⁻³, and a specific heat of 4.07 kJ kg⁻¹ °C⁻¹.
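The specific heat follows from the measured quantities through the definition of thermal diffusivity, alpha = k/(rho c). A quick consistency check in LaTeX, using the reported values:

\[
c \;=\; \frac{k}{\rho\,\alpha}
  \;=\; \frac{0.52\ \mathrm{W\,m^{-1}\,{}^{\circ}C^{-1}}}
             {815.6\ \mathrm{kg\,m^{-3}} \times 1.56\times10^{-7}\ \mathrm{m^{2}\,s^{-1}}}
  \;\approx\; 4.09\ \mathrm{kJ\,kg^{-1}\,{}^{\circ}C^{-1}},
\]

which agrees, to within rounding of the inputs, with the reported 4.07 kJ kg⁻¹ °C⁻¹.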
Abstract:
cDNA arrays are a powerful tool for discovering gene expression patterns. Nylon arrays have the advantage that they can be re-used several times. A key issue in high-throughput gene expression analysis is sensitivity. In the case of nylon arrays, signal detection can be affected by the plastic bags used to keep the membranes humid. In this study, we evaluated the effect of five types of plastic on radioactive transmittance, the number of genes with a signal above background, and data variability. A polyethylene plastic bag 69 μm thick had a strong shielding effect that blocked 68.7% of the radioactive signal. This shielding effect on transmittance decreased the number of detected genes and increased the data variability. Other, thinner plastics gave better results. Although the plastics made from polyvinylidene chloride, polyvinyl chloride (both 13 μm thick), and polyethylene (29 and 7 μm thick) showed different levels of transmittance, they all gave similarly good performances. Polyvinylidene chloride and polyethylene 29 μm thick were the plastics of choice because of their easy handling. For other types of plastic, it is advisable to run a simple check on their performance in order to obtain the maximum information from nylon cDNA arrays.