31 results for [JEL:C5] Mathematical and Quantitative Methods - Econometric Modeling
Abstract:
The reaction of localised C=C bonds on the surface of activated carbons has been shown to be an effective method of chemical modification, especially using microwave-assisted reactions.
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infra-red radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). Recently, however, graphics processing unit (GPU) based data processing methods have been developed to minimise data processing and rendering time. These techniques include standard-processing methods, which comprise a set of algorithms to process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time, with processing throughput limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine tuning of the operating conditions of the OCT system. Investigations are currently under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and accelerating OCT data processing using GPUs. In the process of developing phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, an extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
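As an illustration of the standard-processing chain mentioned above, the following is a minimal sketch (not the thesis's GPU implementation) of how raw spectral interference data are commonly turned into A-scans in an FD-OCT system: background subtraction, spectral windowing and an inverse FFT. Array names and sizes are illustrative assumptions.

```python
# Minimal FD-OCT A-scan generation sketch: background subtraction, windowing,
# inverse FFT and log-scaled magnitude. All shapes/values are illustrative.
import numpy as np

def generate_ascans(raw_spectra, background=None):
    """raw_spectra: 2D array (n_ascans, n_pixels) of detector interference data."""
    spectra = raw_spectra.astype(float)
    if background is None:
        background = spectra.mean(axis=0)           # estimate the DC/reference term
    spectra = spectra - background                   # remove the DC/reference term
    window = np.hanning(spectra.shape[1])            # suppress FFT side lobes
    depth_profiles = np.fft.ifft(spectra * window, axis=1)
    ascans_db = 20 * np.log10(np.abs(depth_profiles) + 1e-12)  # log-scaled magnitude
    return ascans_db[:, : spectra.shape[1] // 2]     # keep the non-mirrored half

# Example: 512 synthetic spectra of 2048 pixels each
ascans = generate_ascans(np.random.rand(512, 2048))
```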
Abstract:
This study draws upon effectuation and causation as examples of planning-based and flexible decision-making logics, and investigates dynamics in the use of both logics. The study applies a longitudinal process research approach to investigate strategic decision-making in new venture creation over time. Combining qualitative and quantitative methods, we analyze 385 decision events across nine technology-based ventures. Our observations suggest a hybrid perspective on strategic decision-making, demonstrating how effectuation and causation logics are combined, and how entrepreneurs’ emphasis on these logics shifts and re-shifts over time. We induce a dynamic model which extends the literature on strategic decision-making in venture creation.
Abstract:
Purpose: To assess the inter- and intra-observer variability of subjective grading of the retinal arterio-venous ratio (AVR) using visual grading, and to compare the subjectively derived grades to an objective method using a semi-automated computer program. Methods: Following intraocular pressure and blood pressure measurements, all subjects underwent dilated fundus photography. 86 monochromatic retinal images with the optic nerve head centred (52 healthy volunteers) were obtained using a Zeiss FF450+ fundus camera. Arterio-venous ratios (AVR), central retinal artery equivalent (CRAE) and central retinal vein equivalent (CRVE) were calculated on three separate occasions by a single observer semi-automatically using the software VesselMap (ImedosSystems, Jena, Germany). Following the automated grading, three examiners graded the AVR visually on three separate occasions in order to assess their agreement. Results: Reproducibility of the semi-automatic parameters was excellent (ICCs: 0.97 (CRAE); 0.985 (CRVE) and 0.952 (AVR)). However, visual grading of AVR showed inter-grader differences as well as discrepancies between subjectively derived and objectively calculated AVR (all p < 0.000001). Conclusion: Grader education and experience lead to inter-grader differences but, more importantly, subjective grading is not capable of picking up subtle differences across healthy individuals and does not represent the true AVR when compared with an objective assessment method. Technological advancements mean we no longer rely on ophthalmoscopic evaluation but can capture and store fundus images with retinal cameras, enabling us to measure vessel calibre more accurately than with visual estimation; hence objective measurement should be integrated into optometric practice for improved accuracy and reliability of clinical assessments of retinal vessel calibres. © 2014 Spanish General Council of Optometry.
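For context, the sketch below shows one way an objective AVR can be computed from measured vessel calibres, using the revised Knudtson summary formulas as an assumed procedure; the VesselMap software may implement a different algorithm, and the calibres used are arbitrary example values.

```python
# Sketch of CRAE/CRVE/AVR via the revised Knudtson pairing procedure (assumed here):
# repeatedly pair the narrowest with the widest calibre and combine until one value
# remains; arterioles use k = 0.88, venules k = 0.95.
import math

def _summarize(widths, k):
    w = sorted(widths)
    while len(w) > 1:
        nxt = []
        while len(w) > 1:
            nxt.append(k * math.hypot(w.pop(0), w.pop(-1)))  # combine narrowest + widest
        if w:                      # odd count: the median carries to the next round
            nxt.append(w.pop())
        w = sorted(nxt)
    return w[0]

def avr(arteriole_widths, venule_widths):
    crae = _summarize(arteriole_widths, k=0.88)   # central retinal artery equivalent
    crve = _summarize(venule_widths, k=0.95)      # central retinal vein equivalent
    return crae / crve

# Example with six hypothetical calibres per vessel type (arbitrary units)
print(avr([110, 105, 98, 95, 90, 88], [150, 140, 135, 128, 120, 115]))
```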
Abstract:
This article attempts to repair the neglect of the qualitative uses of some and to suggest an explanation which could cover the full range of usage with this determiner - both quantitative and qualitative - showing how a single underlying meaning, modulated by contextual and pragmatic factors, can give rise to the wide variety of messages expressed by some in actual usage. Both the treatment of some as an existential quantifier and the scalar model which views some as evoking a less-than-expected quantity on a pragmatic scale are shown to be incapable of handling the qualitative uses of this determiner. An original analysis of some and the interaction of its meaning with the defining features of the qualitative uses is proposed, extending the discussion as well to the role of focus and the adverbial modifier quite. The crucial semantic feature of some for the explanation of its capacity to express qualitative readings is argued to be non-identification of a referent assumed to be particular. Under the appropriate conditions, this notion can give rise to qualitative denigration (implying it is not even worth the bother to identify the referent) or qualitative appreciation (implying the referent to be so outstanding that it defies identification). The explanation put forward is also shown to cover some's use as an approximator, thereby enhancing its plausibility even further. © Cambridge University Press 2012.
Abstract:
Variationist sociolinguistics was one of the first branches of linguistics to adopt a quantitative approach to data analysis (e.g., Fischer, 1958; Labov, 1963, 1966, 1969; Wolfram, 1969).
Abstract:
Cloud computing is a new technological paradigm offering computing infrastructure, software and platforms as a pay-as-you-go, subscription-based service. Many potential customers of cloud services require essential cost assessments to be undertaken before transitioning to the cloud. Current assessment techniques are imprecise as they rely on simplified specifications of resource requirements that fail to account for probabilistic variations in usage. In this paper, we address these problems and propose a new probabilistic pattern modelling (PPM) approach to cloud costing and resource usage verification. Our approach is based on a concise expression of probabilistic resource usage patterns translated to Markov decision processes (MDPs). Key costing and usage queries are identified and expressed in a probabilistic variant of temporal logic and calculated to a high degree of precision using quantitative verification techniques. The PPM cost assessment approach has been implemented as a Java library and validated with a case study and scalability experiments. © 2012 Springer-Verlag Berlin Heidelberg.
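As an illustration of the kind of quantitative query such verification answers, here is a minimal sketch (not the paper's PPM library) of computing the minimum expected cumulative cost to reach a target state of a small MDP by value iteration; the states, actions, probabilities and costs are invented for the example.

```python
# Value iteration for minimum expected cost-to-target in a tiny, hypothetical MDP.
# transitions[state][action] = list of (probability, next_state); cost[action] in "credits".
transitions = {
    "idle":    {"provision": [(0.9, "running"), (0.1, "idle")]},
    "running": {"serve":     [(0.8, "done"), (0.2, "running")]},
    "done":    {},
}
cost = {"provision": 5.0, "serve": 1.0}

def min_expected_cost(target="done", iters=200):
    v = {s: 0.0 for s in transitions}              # converges upward to the fixed point
    for _ in range(iters):
        for s, acts in transitions.items():
            if s == target or not acts:
                continue
            v[s] = min(cost[a] + sum(p * v[t] for p, t in succ)
                       for a, succ in acts.items())
    return v

print(min_expected_cost())   # e.g. expected cost from "idle" to "done"
```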
Abstract:
The aim of this review was to quantify the global variation in childhood myopia prevalence over time, taking account of demographic and study design factors. A systematic review identified population-based surveys with estimates of childhood myopia prevalence published by February 2015. Multilevel binomial logistic regression of the log odds of myopia was used to examine the association with age, gender, urban versus rural setting and survey year among populations of different ethnic origins, adjusting for study design factors. 143 published articles (42 countries, 374 349 subjects aged 1-18 years, 74 847 myopia cases) were included. The increase in myopia prevalence with age varied by ethnicity. East Asians showed the highest prevalence, reaching 69% (95% credible intervals (CrI) 61% to 77%) at 15 years of age (86% among Singaporean-Chinese). Blacks in Africa had the lowest prevalence: 5.5% at 15 years (95% CrI 3% to 9%). Time trends in myopia prevalence over the last decade were small in whites and increased by 23% in East Asians, with a weaker increase among South Asians. Children from urban environments had 2.6 times the odds of myopia compared with those from rural environments. In whites and East Asians, sex differences emerge at about 9 years of age; by late adolescence girls are twice as likely as boys to be myopic. Marked ethnic differences in the age-specific prevalence of myopia exist. Rapid increases in myopia prevalence over time, particularly in East Asians, combined with a universally higher risk of myopia in urban settings, suggest that environmental factors play an important role in myopia development, which may offer scope for prevention.
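As a small worked example of the odds-ratio finding reported above, the sketch below converts an assumed rural prevalence into odds, applies the reported urban/rural odds ratio of 2.6, and converts back to a prevalence; the 10% rural prevalence is an arbitrary illustrative value, not a figure from the review.

```python
# Applying an odds ratio to a baseline prevalence (illustrative values only).
def apply_odds_ratio(prevalence, odds_ratio):
    odds = prevalence / (1.0 - prevalence)       # prevalence -> odds
    new_odds = odds * odds_ratio                 # apply the odds ratio
    return new_odds / (1.0 + new_odds)           # odds -> prevalence

rural = 0.10                                     # assumed 10% rural prevalence
print(apply_odds_ratio(rural, 2.6))              # implied urban prevalence ~ 0.224
```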
Abstract:
In this work, we introduce the periodic nonlinear Fourier transform (PNFT) method as an alternative and efficacious tool for compensation of nonlinear transmission effects in optical fiber links. In Part I, we introduce the algorithmic platform of the technique, describing in detail the direct and inverse PNFT operations, also known as the inverse scattering transform for the periodic (in the time variable) nonlinear Schrödinger equation (NLSE). We pay special attention to explaining the potential advantages of PNFT-based processing over the previously studied nonlinear Fourier transform (NFT) based methods. Further, we elucidate the issue of numerical PNFT computation: we compare the performance of four known numerical methods applicable to the calculation of the nonlinear spectral data (the direct PNFT), in particular taking the main spectrum (utilized further in Part II for modulation and transmission) associated with some simple example waveforms as the quality indicator for each method. We show that the Ablowitz-Ladik discretization approach for the direct PNFT provides the best performance in terms of accuracy and computational time.
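As a rough illustration of the direct PNFT step discussed above, the sketch below builds the monodromy matrix of one waveform period from per-sample Ablowitz-Ladik transfer matrices and evaluates the Floquet discriminant, whose values of ±1 locate the main spectrum. The transfer-matrix sign and normalisation conventions are assumptions here, as conventions vary in the literature; this is not the paper's implementation.

```python
# Sketch of the Floquet discriminant of the periodic Zakharov-Shabat problem for the
# focusing NLSE, using a normalised Ablowitz-Ladik transfer matrix per sample.
import numpy as np

def floquet_discriminant(q, dt, lam):
    """q: complex samples over one period of the waveform; lam: spectral parameter."""
    z = np.exp(-1j * lam * dt)
    M = np.eye(2, dtype=complex)
    for qn in q:
        Qn = qn * dt
        T = np.array([[z, Qn], [-np.conj(Qn), 1.0 / z]], dtype=complex)
        M = T @ M / np.sqrt(1.0 + abs(Qn) ** 2)   # normalised Ablowitz-Ladik step
    return 0.5 * np.trace(M)                      # main spectrum where this equals +/-1

# Example: a plane-wave period sampled at 256 points
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
q = 0.5 * np.exp(1j * t)
print(floquet_discriminant(q, t[1] - t[0], lam=0.3 + 0.1j))
```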
Abstract:
BACKGROUND: Many adolescents have poor asthma control and impaired quality of life despite the availability of modern pharmacotherapy. Research suggests that poor adherence to treatment and limited engagement in self-management could be contributing factors. OBJECTIVE: To conduct a systematic review of the barriers and facilitators to self-management of asthma reported by adolescents using a narrative synthesis approach to integrate the findings. DESIGN: MEDLINE, EMBASE, CINAHL, and PsycINFO were searched for all types of study design. Full papers were retrieved for study abstracts that included data from participants aged 12-18 years referring to barriers or facilitators of asthma self-management behaviors. RESULTS: Sixteen studies (5 quantitative and 11 qualitative) underwent data extraction, quality appraisal, and thematic analysis. Six key themes were generated that encompassed barriers and/or facilitators to self-management of asthma in adolescents: Knowledge, Lifestyle, Beliefs and Attitudes, Relationships, Intrapersonal Characteristics, and Communication. CONCLUSIONS: There is a pressing need to prepare adolescents for self-management, using age-appropriate strategies that draw on the evidence we have synthesized. Current clinical practice should focus on ensuring adolescents have the correct knowledge, beliefs, and positive attitude to self-manage their illness. This needs to be delivered in a supportive environment that facilitates two-way communication, fosters adolescents' self-efficacy to manage their disease, and considers the wider social influences that impinge on self-management. Pediatr Pulmonol. 2016; 9999:XX-XX. © 2016 Wiley Periodicals, Inc.
Abstract:
This paper presents a framework based upon the relationship between environmental benefits and the investments and costs needed to implement and run company operations. As the results of environmental management become more evident, it is proposed that the benefits, rather than the environmental impacts, are measured in the analysis of environmental performance. Four categories, or stages, are defined in this paper: “creative-green”, “expensive-green”, “inefficient-green or beginner” and, finally, the “complacent” stage. The paper describes the characteristics of each category and provides examples of indicators that could be used to measure environmental benefits. Qualitative and quantitative methods are necessary to classify companies according to the framework. It is believed that this paper can assist companies and public organisations in assessing operations and projects with respect to their level of sustainability. The proposed framework can influence foreign direct investment (FDI) and environmental policies in the public arena, and foster innovation in environmental practices within the private sector.
Abstract:
The return to methods focusing on language and experience, following the dominance of experimental methods, has in the last few decades led to debate, dialogue and disagreement regarding the status of qualitative and quantitative methods. However, a recent focus on impact has brought an air of pragmatism to the research arena. In what ways, then, is psychology moving from the entrenched mono-method approaches that have epitomised its development until recently to describing and discussing ways in which mixed and pluralistic research can advance and contribute to further, deeper psychological understanding?
Abstract:
A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
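As a concrete illustration of the simplest approximation discussed, the sketch below applies the naive mean-field (self-consistency) update to a small Ising-type model; the couplings and fields are arbitrary example values.

```python
# Naive mean-field approximation for an Ising-type model: each magnetisation m_i is
# updated self-consistently via m_i = tanh(beta * (sum_j J_ij m_j + h_i)).
import numpy as np

def mean_field(J, h, beta=1.0, iters=200):
    m = np.zeros(len(h))                       # start from zero magnetisations
    for _ in range(iters):
        m = np.tanh(beta * (J @ m + h))        # fixed-point (self-consistency) update
    return m

J = np.array([[0.0, 0.5, 0.2],
              [0.5, 0.0, 0.3],
              [0.2, 0.3, 0.0]])                # symmetric example couplings
h = np.array([0.1, -0.2, 0.05])                # example external fields
print(mean_field(J, h))
```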
Abstract:
The thesis presents a two-dimensional Risk Assessment Method (RAM) in which the assessment of risk to groundwater resources incorporates both the quantification of the probability of the occurrence of contaminant source terms and the assessment of the resultant impacts. The approach emphasises the need for a greater dependency on the potential pollution sources, rather than the traditional approach where assessment is based mainly on intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation, whereby random pollution events are generated according to the same distribution as historically occurring events or an a priori probability distribution. Integrated mathematical models then simulate contaminant concentrations at predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations are calculated from repeated realisations, and the number of times a user-defined concentration magnitude is exceeded is quantified as the risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch processing file. GIS software was employed in producing the input files and for the presentation of the results. The functionality of the method, as well as its sensitivity to model grid size, contaminant loading rate, length of stress period and the historical frequency of occurrence of pollution events, was evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, the risk model generates a synthetic contaminant source term as an input to the transport model. The results of the applications of the method are presented in the form of tables, graphs and spatial maps. Varying the model grid size indicates no significant effect on the simulated groundwater head. The simulated frequency of daily occurrence of pollution incidents is also independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, appear to increase with increasing grid size. Also, the contaminant plume advances faster with coarse grid sizes than with finer grid sizes. The number of daily contaminant source terms generated, and consequently the total mass of contaminant within the aquifer, increases non-linearly with the increasing frequency of occurrence of pollution events. The risk of pollution from a number of sources all occurring by chance together was evaluated and presented quantitatively as risk maps. This capability to combine the risk to a groundwater feature from numerous potential sources of pollution proved to be a great asset of the method, and a significant advantage over contemporary risk and vulnerability methods.
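The event-generation and exceedance-counting logic described above can be sketched schematically as follows, decoupled from MODFLOW/MT3DMS; the transport step is a placeholder function and all parameter values are illustrative assumptions, not values from the case study.

```python
# Schematic Monte Carlo risk loop: generate random pollution events at an assumed
# frequency, pass them through a placeholder transport step, and report risk as the
# fraction of realisations exceeding a user-defined concentration threshold.
import random

def simulate_concentration(n_events):
    """Placeholder for the MODFLOW/MT3DMS transport simulation (mg/l)."""
    return 40.0 * n_events * random.uniform(0.5, 1.5)

def pollution_risk(p_event=0.02, days=365, threshold=250.0, realisations=1000):
    exceedances = 0
    for _ in range(realisations):
        # a source term is generated on each day a random number falls below p_event
        n_events = sum(1 for _ in range(days) if random.random() < p_event)
        if simulate_concentration(n_events) > threshold:
            exceedances += 1
    return exceedances / realisations            # risk = exceedance frequency

print(pollution_risk())
```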
Abstract:
The purpose of the work described here has been to seek methods of narrowing the present gap between currently realised heat pump performance and the theoretical limit. The single most important prerequisite to this objective is the identification and quantitative assessment of the various non-idealities and degradative phenomena responsible for the present shortfall. The use of availability analysis has been introduced as a diagnostic tool and applied to a few very simple, highly idealised Rankine cycle optimisation problems. From this work, it has been demonstrated that the scope for improvement through optimisation is small in comparison with the extensive potential for improvement from reducing the compressor's losses. A fully instrumented heat pump was assembled and extensively tested. This furnished performance data and led to an improved understanding of the system's behaviour. A very simple analysis of the resulting compressor performance data confirmed the compressor's low efficiency. In addition, in order to obtain experimental data concerning specific details of the heat pump's operation, several novel experiments were performed. The experimental work was concluded with a set of tests which attempted to obtain definitive performance data for a small set of discrete operating conditions. These tests included an investigation of the effect of two compressor modifications. The resulting performance data were analysed by a sophisticated calculation which used the measurements to quantify each degradative phenomenon occurring in the compressor, and so indicate where the greatest potential for improvement lies. Finally, in the light of everything that was learnt, specific technical suggestions have been made to reduce the losses associated with both the refrigerant circuit and the compressor.
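As an illustration of the availability (exergy) book-keeping used as a diagnostic tool above, the sketch below evaluates the exergy destruction and second-law efficiency of an adiabatic compressor from state-point data; the mass flow, enthalpy and entropy values are invented examples, not measurements from the test rig.

```python
# Exergy accounting for an adiabatic compressor: exergy destroyed equals the work
# input minus the flow-exergy rise (equivalently m_dot * T0 * (s_out - s_in)).
def compressor_exergy(m_dot, h_in, h_out, s_in, s_out, T0=288.15):
    w_in = m_dot * (h_out - h_in)                             # actual work input, kW
    ex_rise = m_dot * ((h_out - h_in) - T0 * (s_out - s_in))  # flow-exergy increase, kW
    destroyed = w_in - ex_rise                                # irreversibility, kW
    return w_in, destroyed, ex_rise / w_in                    # work, losses, 2nd-law efficiency

# Example: 0.05 kg/s of refrigerant, enthalpies in kJ/kg, entropies in kJ/(kg K)
print(compressor_exergy(0.05, h_in=405.0, h_out=445.0, s_in=1.75, s_out=1.78))
```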