7 results for Logical Mathematical Structuration of Reality
in Digital Commons at Florida International University
Abstract:
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat; it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative, and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically weighted regression analysis produced the most accurate result by accommodating non-stationary coefficient behavior, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism. This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality of life.
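A minimal sketch of the core computation in geographically weighted regression: at each location, a weighted least-squares fit with distance-decaying weights lets the coefficients vary over space (the non-stationary behavior noted above). The coordinates, covariates, and bandwidth below are synthetic stand-ins, not the dissertation's data or calibration.

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Fit a separate weighted least-squares model at each location.

    Weights decay with distance via a Gaussian kernel, so the fitted
    coefficients may vary over space (non-stationary behavior)."""
    n, k = X.shape
    betas = np.empty((n, k))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)   # distances to site i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
        W = np.diag(w)
        # Local weighted least squares: beta_i = (X'WX)^-1 X'Wy
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return betas

# Hypothetical example: 200 locations, intercept plus two covariates
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=0.1, size=200)
local_betas = gwr_coefficients(coords, X, y, bandwidth=25.0)
print(local_betas[:3])  # coefficients differ from location to location
```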
Abstract:
Limited literature regarding parameter estimation of dynamic systems has been identified as the central reason for the absence of parametric bounds in chaotic time series. The literature suggests that a chaotic system displays a sensitive dependence on initial conditions, and our study reveals that the behavior of a chaotic system is also sensitive to changes in parameter values. Therefore, a parameter estimation technique could make it possible to establish parametric bounds on the nonlinear dynamic system underlying a given time series, which in turn can improve predictability. By extracting the relationship between parametric bounds and predictability, we implemented chaos-based models for improving prediction in time series.

This study describes work done to establish bounds on a set of unknown parameters. Our results reveal that by establishing parametric bounds, it is possible to improve the predictability of any time series, even though the dynamics or the mathematical model of that series is not known a priori. In our attempt to improve the predictability of various time series, we established bounds for a set of unknown parameters: (i) the embedding dimension used to unfold a set of observations in phase space, (ii) the time delay to use for a series, (iii) the number of neighborhood points to use to avoid detecting false neighbors, and (iv) the local polynomial used to build numerical interpolation functions from one region to another. Using these bounds, we are able to obtain better predictability in chaotic time series than previously reported. In addition, the developments of this dissertation establish a theoretical framework for investigating predictability in time series from the system-dynamics point of view.

In closing, our procedure significantly reduces computing resource usage, as the search method is refined and efficient. Finally, the uniqueness of our method lies in its ability to extract the chaotic dynamics inherent in a nonlinear time series by observing its values.
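A minimal sketch of the time-delay embedding governed by parameters (i) and (ii) above, using a logistic-map series as a stand-in. The candidate dimension and delay values are hypothetical; the dissertation's bound-search procedure itself is not reproduced.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Unfold a scalar time series into dim-dimensional phase-space
    vectors using time delay tau (Takens-style embedding)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Hypothetical example: logistic map in its chaotic regime
x = np.empty(1000)
x[0] = 0.4
for t in range(999):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

# Bounds such as 1 <= dim <= 6 and 1 <= tau <= 10 would be searched;
# here we simply embed with one candidate pair.
vectors = delay_embed(x, dim=3, tau=2)
print(vectors.shape)  # (996, 3)
```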
Abstract:
A two-phase, three-dimensional computational model of an intermediate temperature (120–190°C) proton exchange membrane (PEM) fuel cell is presented. This represents the first attempt to model PEM fuel cells employing intermediate temperature membranes, in this case phosphoric acid doped polybenzimidazole (PBI). To date, mathematical modeling of PEM fuel cells has been restricted to low temperature operation, especially to those employing Nafion® membranes, while research on PBI as an intermediate temperature membrane has been solely at the experimental level. This work advances the state of the art of both these fields of research. With a growing trend toward higher temperature operation of PEM fuel cells, mathematical modeling of such systems is necessary to help hasten the development of the technology and highlight areas where research should be focused.

This mathematical model accounted for all the major transport and polarization processes occurring inside the fuel cell, including the two-phase phenomenon of gas dissolution in the polymer electrolyte. Results were presented for polarization performance, flux distributions, concentration variations in both the gaseous and aqueous phases, and temperature variations for various heat management strategies. The model predictions matched published experimental data well and were self-consistent.

The major finding of this research was that, due to the transport limitations imposed by the use of phosphoric acid as a doping agent, namely the low solubility and diffusivity of dissolved gases and anion adsorption onto catalyst sites, catalyst utilization is very low (∼1–2%). Significant cost savings were predicted with the use of advanced catalyst deposition techniques that would greatly reduce the thickness of the catalyst layer and subsequently improve catalyst utilization. The model also predicted that an increase in power output on the order of 50% can be expected if alternative doping agents to phosphoric acid can be found that afford better transport properties for dissolved gases, reduce anion adsorption onto catalyst sites, and maintain stability and conductive properties at elevated temperatures.
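For orientation, the polarization performance mentioned above can be sketched with the textbook relation: reversible potential minus activation (Tafel), ohmic, and concentration overpotentials. The parameter values below are illustrative assumptions and do not come from the dissertation's full three-dimensional two-phase model.

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the dissertation)
E_rev = 1.10    # reversible cell potential, V
i0    = 1e-4    # exchange current density, A/cm^2
b     = 0.08    # Tafel slope (natural-log form), V
R_ohm = 0.15    # area-specific ohmic resistance, ohm*cm^2
i_lim = 1.4     # limiting current density, A/cm^2

def cell_voltage(i):
    """Textbook polarization relation: reversible potential minus
    activation (Tafel), ohmic, and concentration overpotentials."""
    eta_act  = b * np.log(i / i0)
    eta_ohm  = R_ohm * i
    eta_conc = -b * np.log(1.0 - i / i_lim)
    return E_rev - eta_act - eta_ohm - eta_conc

for i in (0.1, 0.4, 0.8, 1.2):
    print(f"i = {i:.1f} A/cm^2 -> V = {cell_voltage(i):.3f} V")
```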
Abstract:
Nitric oxide (NO) is produced in the vascular endothelium, from which it diffuses to the adjacent smooth muscle cells (SMCs), activating agents known to regulate vascular tone. The close proximity of the site of NO production to the red blood cells (RBCs) and NO's known fast consumption by hemoglobin suggest that the blood will scavenge most of the NO produced. It is therefore unclear how NO is able to play its role in accomplishing vasodilation. Investigation of NO production and consumption rates allows insight into this paradox. DAF-FM is a sensitive NO fluorescence probe widely used for qualitative assessment of cellular NO production. With the aid of a mathematical model of NO/DAF-FM reaction kinetics, experimental studies were conducted to calibrate the fluorescence signal, showing that the slope of fluorescent intensity is proportional to [NO]² and exhibits a saturation dependence on [DAF-FM]. In addition, the experimental data exhibited a Km dependence on [NO]. This finding was incorporated into the model, elucidating NO₂ as the possible activating agent of DAF-FM. A calibration procedure was developed and applied to agonist-stimulated cells, providing an estimated NO release rate of 0.418 ± 0.18 pmol/(cm²·s). To assess NO consumption by RBCs, the rate of NO consumption was measured in a gas stream flowing on top of an RBC solution of specified hematocrit (Hct). The consumption rate constant (kbl) in porcine RBCs at 25°C and 45% Hct was estimated to be 3500 ± 700 s⁻¹. kbl is highly dependent on Hct and can reach up to 9900 ± 4000 s⁻¹ at 60% Hct. The nonlinear dependence of kbl on Hct suggests a predominant role for extracellular diffusion in limiting NO uptake. Further simulations, using the estimated NO consumption rate, showed a linear relationship between varying NO production rates and NO availability in the SMCs. The SMC [NO] level corresponding to the average estimated NO production rate was approximately 15.1 nM. With the aid of experimental and theoretical methods, we were able to examine the NO paradox and show that endothelium-derived NO is able to escape scavenging by RBCs and diffuse to the SMCs.
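One functional form consistent with the reported dependences (slope proportional to [NO]² with saturation in [DAF-FM]) can be sketched as follows. The constants and the specific saturation form are hypothetical; the dissertation's fitted kinetics are not reproduced here.

```python
# Hypothetical constants for illustration only
k_f   = 1.0e-3   # lumped rate constant (arbitrary units)
K_daf = 5.0      # assumed half-saturation constant for DAF-FM, uM

def fluorescence_slope(no, daf):
    """Slope of fluorescent intensity: proportional to [NO]^2,
    saturating in [DAF-FM] (one form consistent with the abstract)."""
    return k_f * no**2 * daf / (K_daf + daf)

# Saturation behavior: raising [DAF-FM] gives diminishing returns
for daf in (1.0, 5.0, 50.0):
    print(f"[DAF-FM] = {daf:5.1f} -> slope = {fluorescence_slope(0.2, daf):.3e}")
```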
Abstract:
Math literacy is imperative for success in society, and experience is key to acquiring it. A preschooler's world is full of mathematical experiences: children are continually counting, sorting, and comparing as they play, and as they engage in these activities they use language as a tool to express their mathematical thinking. If teachers are aware of these teachable moments and help children bridge their daily experiences to mathematical concepts, math literacy may be enhanced. This study described the interactions between teachers and preschoolers, determining the extent to which teachers scaffold children's everyday language into expressions of mathematical concepts. Of primary concern were the teachers' responses to children's implicit mathematical utterances made during block play. The parallel mixed methods research design consisted of two strands. Strand 1 focused on preschoolers' use of everyday language and the teachers' responses after a child made a mathematical utterance. Twelve teachers and 60 students were observed and videotaped while engaged in block play. Each teacher worked with five children for 20 minutes, yielding 240 minutes of observation. Interaction analysis was used to deductively analyze the recorded observations and field notes. Using a priori codes for the five mathematical concepts, it was found that children produced 2,831 mathematical utterances. Teachers ignored 60% of these utterances and responded to, but did not mediate, 30% of them. Only 10% of the mathematical utterances were mediated to a mathematical concept. Strand 2 focused on the teachers' views of the role of language in early childhood mathematics. The 12 teachers who had been observed in the first strand were interviewed. Based on a thematic analysis of these interviews, three themes emerged: (a) the importance of a child's environment, (b) the importance of an education in society, and (c) the role of math in early childhood. Finally, based on a meta-inference of both strands, three themes emerged: (a) teacher conception of math, (b) teacher practice, and (c) teacher sensitivity. Implications based on the findings involve policy, curriculum, and professional development.
Abstract:
Traditional optics has provided ways to compensate for some common visual limitations (up to second-order visual impairments) through spectacles or contact lenses. Recent developments in wavefront science make it possible to obtain an accurate model of the point spread function (PSF) of the human eye. Through what is known as the "wavefront aberration function" of the human eye, exact knowledge of the eye's optical aberration is possible, allowing a mathematical model of the PSF to be obtained. This model could be used to pre-compensate (inverse-filter) the images displayed on computer screens in order to counter the distortion in the user's eye. This project takes advantage of the fact that the wavefront aberration function, commonly expressed as a Zernike polynomial, can be generated from the ophthalmic prescription used to fit spectacles to a person. This allows the pre-compensation, or on-screen deblurring, to be done for visual impairments up to second order (commonly known as myopia, hyperopia, or astigmatism). The technique proposed toward that goal is presented, along with results obtained by introducing a lens with a known PSF into the visual path of subjects without visual impairment. In addition to substituting for the effect of spectacles or contact lenses in correcting the low-order visual limitations of the viewer, the significance of this approach is that it has the potential to address higher-order abnormalities in the eye that are currently not correctable by simple means.
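A minimal sketch of the pre-compensation idea: build a PSF from a Zernike-described wavefront (here defocus only, with an assumed coefficient standing in for a real prescription) and apply a regularized inverse filter to an image. All parameter values are hypothetical, and this is a generic Wiener-style inverse filter rather than the project's exact pipeline.

```python
import numpy as np

N = 256                        # grid size
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
pupil_mask = (R2 <= 1.0)

# Defocus-only wavefront from the Zernike term Z(2,0) = sqrt(3)*(2r^2 - 1);
# the coefficient (in waves) is a hypothetical stand-in for a prescription.
c_defocus = 0.5
W = c_defocus * np.sqrt(3) * (2 * R2 - 1)

# Generalized pupil function and incoherent PSF
P = pupil_mask * np.exp(2j * np.pi * W)
psf = np.abs(np.fft.fftshift(np.fft.fft2(P)))**2
psf /= psf.sum()

# Wiener-style inverse filter: pre-compensate an image so that, after
# blurring by the eye's PSF, it approximates the intended original.
otf = np.fft.fft2(np.fft.ifftshift(psf))
eps = 1e-3                     # regularization to avoid division blow-up
image = np.random.rand(N, N)   # stand-in for screen content
pre = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(otf) /
                           (np.abs(otf)**2 + eps)))
print(pre.shape)
```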
Abstract:
The aim of this work was to develop a new methodology that can be used to design new refrigerants that are better than those currently in use. The methodology draws some parallels with the general approach of computer-aided molecular design; however, the mathematical way of representing the molecular structure of an organic compound and the use of meta-models during the optimization process make it different. In essence, this approach aims to generate molecules that conform to various property requirements that are known and specified a priori. A modified way of mathematically representing the molecular structure of an organic compound having up to four carbon atoms, along with atoms of other elements such as hydrogen, oxygen, fluorine, chlorine, and bromine, was developed. The normal boiling temperature, enthalpy of vaporization, vapor pressure, tropospheric lifetime, and biodegradability of 295 different organic compounds were collected from the open literature and databases or estimated. Surrogate models linking these quantities with the molecular structure were developed. Constraints ensuring the generation of structurally feasible molecules were formulated and used with commercially available optimization algorithms to generate molecular structures of promising new refrigerants. This study was intended to serve as a proof of concept for designing refrigerants with the newly developed methodology.
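A minimal sketch of the generate-and-test idea: enumerate atom-count vectors, keep the structurally feasible ones, and screen them with a surrogate property model. The surrogate coefficients and the simple valence rule below are illustrative assumptions, not the dissertation's fitted models or full constraint set, and a real implementation would use an optimizer rather than brute-force enumeration.

```python
from itertools import product

# Hypothetical linear surrogate for normal boiling point (K) as a
# function of atom counts; coefficients are illustrative, not fitted.
COEF = {"C": 25.0, "H": 5.0, "O": 30.0, "F": 8.0, "Cl": 20.0, "Br": 28.0}
BASE = 80.0

def predicted_tb(counts):
    return BASE + sum(COEF[a] * n for a, n in counts.items())

def feasible(counts):
    """Crude structural-feasibility check for acyclic, single-bonded
    molecules: monovalent atoms must exactly fill the open valences."""
    c = counts["C"]
    if not (1 <= c <= 4):
        return False
    mono = counts["H"] + counts["F"] + counts["Cl"] + counts["Br"]
    # An acyclic skeleton of c carbons and o divalent oxygens leaves
    # 2c + 2 open bonds for monovalent atoms.
    return mono == 2 * c + 2

candidates = []
for c, o, h, f, cl, br in product(range(1, 5), range(3), range(11),
                                  range(11), range(5), range(3)):
    counts = {"C": c, "O": o, "H": h, "F": f, "Cl": cl, "Br": br}
    # Screen against an assumed target window for the boiling point
    if feasible(counts) and 230 <= predicted_tb(counts) <= 280:
        candidates.append(counts)

print(len(candidates), "candidate formulas, e.g.", candidates[:3])
```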