203 results for Logic and Probabilistic Models
Abstract:
This paper deals with causal effect estimation strategies in highly heterogeneous empirical settings such as entrepreneurship. We argue that the clearer use of modern tools developed for the estimation of causal effects, in combination with our analysis of different sources of heterogeneity in entrepreneurship, can lead to entrepreneurship research with higher internal validity. We specifically draw support from counterfactual logic and modern research on estimation strategies for causal effects.
Abstract:
Fouling of industrial surfaces by silica and calcium oxalate can be detrimental to a number of process streams. Solution chemistry plays a large role in the rate and type of scale formed on industrial surfaces. This study examines the kinetics and thermodynamics of SiO2 and calcium oxalate composite formation in solutions containing Mg2+ ions, trans-aconitic acid and sucrose, to mimic factory sugar cane juices. The induction time (ti) of silicic acid polymerization is found to depend on the sucrose concentration and SiO2 supersaturation ratio (SS). Generalized kinetic and solubility models are developed for SiO2 and calcium oxalate in binary systems using response surface methodology. The role of sucrose, Mg, trans-aconitic acid, a mixture of Mg and trans-aconitic acid, SiO2 SS ratio and Ca in the formation of composites is explained using the solution properties of these species, including their ability to form complexes.
Abstract:
Background: Associations between sitting-time and physical activity (PA) with depression are unclear. Purpose: To examine concurrent and prospective associations between both sitting-time and PA with prevalent depressive symptoms in mid-aged Australian women. Methods: Data were from 8,950 women, aged 50-55 years in 2001, who completed mail surveys in 2001, 2004, 2007 and 2010. Depressive symptoms were assessed using the Center for Epidemiological Studies Depression questionnaire. Associations between sitting-time (≤4, >4-7, >7 hrs/day) and PA (none, some, meeting guidelines) with depressive symptoms (symptoms/no symptoms) were examined in 2011 in concurrent and lagged mixed effect logistic modeling. Both main effects and interaction models were developed. Results: In main effects modeling, women who sat >7 hrs/day (OR 1.47, 95%CI 1.29-1.67) and women who did no PA (OR 1.99, 95%CI 1.75-2.27) were more likely to have depressive symptoms than women who sat ≤4 hrs/day and who met PA guidelines, respectively. In interaction modeling, the likelihood of depressive symptoms in women who sat >7 hrs/day and did no PA was triple that of women who sat ≤4 hrs/day and met PA guidelines (OR 2.96, 95%CI 2.37-3.69). In prospective main effects and interaction modeling, sitting-time was not associated with depressive symptoms, but women who did no PA were more likely than those who met PA guidelines to have future depressive symptoms (OR 1.26, 95%CI 1.08-1.47). Conclusions: Increasing PA to a level commensurate with PA guidelines can alleviate current depression symptoms and prevent future symptoms in mid-aged women. Reducing sitting-time may ameliorate current symptoms.
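The odds ratios reported above come from logistic models; the arithmetic behind a single unadjusted OR and its confidence interval can be sketched as follows (the counts below are invented for illustration and are not the study's data, which used mixed-effects models with covariate adjustment):

```python
import math

# Hypothetical 2x2 counts (NOT the study's data): rows = exposure
# (sat >7 hrs/day vs <=4 hrs/day), columns = depressive symptoms yes/no.
exposed_yes, exposed_no = 200, 800
referent_yes, referent_no = 150, 950

# Odds ratio: odds of symptoms among the exposed divided by the
# odds among the referent group.
odds_exposed = exposed_yes / exposed_no
odds_referent = referent_yes / referent_no
odds_ratio = odds_exposed / odds_referent

# Approximate 95% CI via the standard log-OR variance formula.
se_log_or = math.sqrt(1/exposed_yes + 1/exposed_no
                      + 1/referent_yes + 1/referent_no)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

An adjusted analysis such as the one in the abstract would model these odds within a regression that controls for covariates and repeated measures, but the interpretation of each OR is the same.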
Abstract:
The health impacts of exposure to ambient temperature have been drawing increasing attention from the environmental health research community, government, society, industries, and the public. Case-crossover and time series models are most commonly used to examine the effects of ambient temperature on mortality. However, some key methodological issues remain to be addressed. For example, few studies have used spatiotemporal models to assess the effects of spatial temperatures on mortality. Few studies have used a case-crossover design to examine the delayed (distributed lag) and non-linear relationship between temperature and mortality. Also, little evidence is available on the effects of temperature changes on mortality, and on differences in heat-related mortality over time. This thesis aimed to address the following research questions: 1. How can the case-crossover design and distributed lag non-linear models be combined? 2. Is there any significant difference in effect estimates between time series and spatiotemporal models? 3. How can the effects on mortality of temperature changes between neighbouring days be assessed? 4. Is there any change in temperature effects on mortality over time? To combine the case-crossover design and distributed lag non-linear model, datasets of deaths, weather conditions (minimum, mean, and maximum temperature, and relative humidity), and air pollution were acquired from Tianjin, China, for the years 2005 to 2007. I demonstrated how to combine the case-crossover design with a distributed lag non-linear model. This allows the case-crossover design to estimate the non-linear and delayed effects of temperature whilst controlling for seasonality. There was a consistent U-shaped relationship between temperature and mortality. Cold effects were delayed by 3 days and persisted for 10 days.
Hot effects were acute, lasted for three days, and were followed by mortality displacement for non-accidental, cardiopulmonary, and cardiovascular deaths. Mean temperature was a better predictor of mortality (based on model fit) than maximum or minimum temperature. It is still unclear whether spatiotemporal models using spatial temperature exposure produce better estimates of mortality risk compared with time series models that use a single site's temperature or averaged temperature from a network of sites. Daily mortality data were obtained from 163 locations across Brisbane city, Australia from 2000 to 2004. Ordinary kriging was used to interpolate spatial temperatures across the city based on 19 monitoring sites. A spatiotemporal model was used to examine the impact of spatial temperature on mortality. A time series model was used to assess the effects of a single site's temperature, and of averaged temperature from 3 monitoring sites, on mortality. Squared Pearson scaled residuals were used to check model fit. The results of this study show that even though spatiotemporal models gave a better model fit than time series models, the two approaches gave similar effect estimates. Time series analyses using temperature recorded at a single monitoring site, or the average temperature of multiple sites, were as good at estimating the association between temperature and mortality as a spatiotemporal model. A time series Poisson regression model was used to estimate the association between temperature change and mortality in summer in Brisbane, Australia during 1996–2004 and Los Angeles, United States during 1987–2000. Temperature change was calculated as the current day's mean temperature minus the previous day's mean.
In Brisbane, a drop of more than 3 °C in temperature between days was associated with relative risks (RRs) of 1.16 (95% confidence interval (CI): 1.02, 1.31) for non-external mortality (NEM), 1.19 (95% CI: 1.00, 1.41) for NEM in females, and 1.44 (95% CI: 1.10, 1.89) for NEM in those aged 65–74 years. An increase of more than 3 °C was associated with RRs of 1.35 (95% CI: 1.03, 1.77) for cardiovascular mortality and 1.67 (95% CI: 1.15, 2.43) for people aged < 65 years. In Los Angeles, only a drop of more than 3 °C was significantly associated with RRs of 1.13 (95% CI: 1.05, 1.22) for total NEM, 1.25 (95% CI: 1.13, 1.39) for cardiovascular mortality, and 1.25 (95% CI: 1.14, 1.39) for people aged ≥ 75 years. In both cities, there were joint effects of temperature change and mean temperature on NEM. A change in temperature of more than 3 °C, whether positive or negative, has an adverse impact on mortality even after controlling for mean temperature. I examined the variation in the effects of high temperatures on elderly mortality (aged ≥ 75 years) by year, city and region for 83 large US cities between 1987 and 2000. High temperature days were defined as two or more consecutive days with temperatures above the 90th percentile for each city during each warm season (May 1 to September 30). The mortality risk for high temperatures was decomposed into a "main effect" due to high temperatures, using a distributed lag non-linear function, and an "added effect" due to consecutive high temperature days. I pooled yearly effects across regions and overall effects at both regional and national levels. The effects of high temperature (both main and added effects) on elderly mortality varied greatly by year, city and region. The years with higher heat-related mortality were often followed by those with relatively lower mortality. Understanding this variability in the effects of high temperatures is important for the development of heat-warning systems.
In conclusion, this thesis makes contributions in several respects. The case-crossover design was combined with a distributed lag non-linear model to assess the effects of temperature on mortality in Tianjin, allowing the case-crossover design to flexibly estimate the non-linear and delayed effects of temperature. Both extreme cold and high temperatures increased the risk of mortality in Tianjin. Time series models using a single site's temperature, or temperature averaged across several sites, can be used to examine the effects of temperature on mortality. Temperature change, whether a significant drop or a large increase, increases the risk of mortality. The effect of high temperature on mortality is highly variable from year to year.
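The temperature-change exposure described above (current day's mean minus the previous day's mean, flagged when it exceeds 3 °C in either direction) can be sketched in a few lines; the daily values here are invented for illustration:

```python
# Sketch of the temperature-change exposure used in the thesis:
# change = today's mean temperature minus yesterday's mean, with
# days flagged when the change exceeds 3 °C in either direction.
# The temperatures below are made up for illustration.
daily_means = [24.1, 25.0, 21.3, 22.0, 26.2, 25.8]  # °C

changes = [round(today - prev, 1)
           for prev, today in zip(daily_means, daily_means[1:])]

big_drops = [c for c in changes if c < -3.0]
big_rises = [c for c in changes if c > 3.0]
print(changes)        # [0.9, -3.7, 0.7, 4.2, -0.4]
print(big_drops, big_rises)
```

In the actual analyses, this derived exposure entered a Poisson regression alongside mean temperature, which is how the joint effects reported above were estimated.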
Abstract:
Organisations are engaging in e-learning as a mechanism for delivering flexible learning to meet the needs of individuals and organisations. In light of the increasing use of and organisational investment in e-learning, the need for methods to evaluate the success of its design and implementation seems more important than ever. To date, developing a standard for the evaluation of e-learning appears to have eluded both academics and practitioners. The currently accepted evaluation methods for e-learning are traditional learning and development models, such as Kirkpatrick's model (1976). Due to the technical nature of e-learning it is important to broaden the scope and consider other evaluation models or techniques, such as the DeLone and McLean Information Systems (IS) Success Model, that may be applicable to the e-learning domain. Research into the use of e-learning courses has largely avoided considering the applicability of information systems research. Given this observation, it is reasonable to conclude that e-learning implementation decisions and practice could be overlooking useful or additional viewpoints. This research investigated how existing evaluation models apply in the context of organisational e-learning, and resulted in an Organisational E-learning Success Framework, which identifies the critical elements for success in an e-learning environment. In particular this thesis highlights the critical importance of three e-learning system creation elements: system quality, information quality, and support quality. These elements were explored in depth and the nature of each element is described in detail. In addition, two further elements were identified as factors integral to the success of an e-learning system: learner preferences and change management. Overall, this research has demonstrated the need for a holistic approach to e-learning evaluation.
Furthermore, it has shown that the application of both traditional training evaluation approaches and the D&M IS Success Model are appropriate to the organisational e-learning context, and when combined can provide this holistic approach. Practically, this thesis has reported the need for organisations to consider evaluation at all stages of e-learning from design through to implementation.
Abstract:
Travelling wave phenomena are observed in many biological applications. Mathematical theory of standard reaction-diffusion problems shows that simple partial differential equations exhibit travelling wave solutions with constant wavespeed and such models are used to describe, for example, waves of chemical concentrations, electrical signals, cell migration, waves of epidemics and population dynamics. However, as in the study of cell motion in complex spatial geometries, experimental data are often not consistent with constant wavespeed. Non-local spatial models have successfully been used to model anomalous diffusion and spatial heterogeneity in different physical contexts. In this paper, we develop a fractional model based on the Fisher-Kolmogoroff equation and analyse it for its wavespeed properties, attempting to relate the numerical results obtained from our simulations to experimental data describing enteric neural crest-derived cells migrating along the intact gut of mouse embryos. The model proposed essentially combines fractional and standard diffusion in different regions of the spatial domain and qualitatively reproduces the behaviour of neural crest-derived cells observed in the caecum and the hindgut of mouse embryos during in vivo experiments.
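As a rough illustration of the travelling-wave behaviour discussed above, here is a minimal explicit finite-difference simulation of the standard (integer-order) Fisher-Kolmogoroff equation u_t = D u_xx + r u(1 - u); the paper's actual model replaces part of the diffusion term with a fractional operator, which this sketch omits, and all parameter values are illustrative:

```python
# Explicit finite-difference sketch of the standard (integer-order)
# Fisher-Kolmogoroff equation u_t = D*u_xx + r*u*(1 - u).
# All parameter values are illustrative.
D, r = 1.0, 1.0
dx, dt = 0.5, 0.05          # dt < dx^2 / (2*D) for stability
n_x, n_steps = 200, 400

u = [0.0] * n_x
for i in range(10):          # initial condition: occupied left edge
    u[i] = 1.0

for _ in range(n_steps):
    new = u[:]
    for i in range(1, n_x - 1):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
        new[i] = u[i] + dt * (D * lap + r * u[i] * (1 - u[i]))
    new[0], new[-1] = new[1], new[-2]   # zero-flux boundaries
    u = new

# Front position: rightmost grid point where u exceeds one half.
front = max(i for i in range(n_x) if u[i] > 0.5) * dx
print(f"front after t = {n_steps * dt}: x ≈ {front}")
```

For this classical equation the front settles to the constant wavespeed 2*sqrt(D*r); the paper's point is that fractional diffusion in part of the domain breaks exactly this constancy.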
Abstract:
The underlying logic of enterprise policy is that there are impediments to change in economic systems that can be traced to the path-dependent behaviors of economic actors that prevent them from exploring new knowledge and new ways of doing things. Enterprise policy involves firm-level interventions delivered by distributed networks of business advisors coordinated by knowledge intermediaries. These metagovernance arrangements are able to disrupt the path-dependent behaviors of organizations. The logic and benefits of enterprise policy are explored through reference to public administration, strategic management and evolutionary theory, and three case studies.
Abstract:
This paper takes its root in a trivial observation: management approaches are unable to provide relevant guidelines to cope with the uncertainty and trust issues of our modern world. Thus, managers seek to reduce uncertainty through information-supported decision-making, sustained by ex-ante rationalization. They strive to achieve the best possible solution, stability, predictability, and control of the "future". Hence, they turn to a plethora of "prescriptive panaceas" and "management fads" that promise simple solutions through best practices. However, these solutions are ineffective. They address only one part of a system (e.g. an organization) instead of the whole, missing the interactions and interdependencies with other parts and leading to "suboptimization". Furthermore, classical cause-and-effect investigations and research are not very helpful in this regard. Where do we go from there? In this conversation, we want to challenge the assumptions supporting traditional management approaches and shed some light on the problem of management discourse fads, using the concept of maturity and maturity models in the context of temporary organizations as a support for reflection. The global economy is characterized by the use and development of standards, and compliance with standards as a practice is said to enable better decision-making by managers under uncertainty, control of complexity, and higher performance. Among the plethora of standards, organizational maturity and maturity models hold a specific place, due to the general belief in organizational performance as a dependent variable of continuous (business) process improvement, grounded on a kind of evolutionary metaphor. Our intention is neither to offer a new "evidence-based management fad" for practitioners, nor to suggest a research gap to scholars.
Rather, we want to open an assumption-challenging conversation with regard to mainstream approaches (neo-classical economics and organization theory), turning "our eyes away from the blinding light of eternal certitude towards the refracted world of turbid finitude" (Long, 2002, p. 44), generating what Bernstein has named "Cartesian Anxiety" (Bernstein, 1983, p. 18), and revisit the conceptualization of maturity and maturity models. We rely on conventions theory and a systemic-discursive perspective. These two lenses have information & communication and self-producing systems as common threads. Furthermore, the narrative approach is well suited to exploring complex ways of thinking about organizational phenomena as complex systems. This approach is relevant to our object of curiosity, i.e. the concept of maturity and maturity models, as maturity models (as standards) are discourses and systems of regulations. The main contribution of this conversation is the suggestion to move from a neo-classical "theory of the game", aiming at making the complex world simpler in playing the game, to a "theory of the rules of the game", aiming at influencing and challenging the rules of the game constitutive of maturity models (conventions, governing systems), which make individual calculation compatible with the social context and make possible the coordination of relationships and cooperation between agents with divergent or potentially divergent interests and values. A second contribution is the reconceptualization of maturity as a structural coupling between conventions, rather than as an independent variable leading to organizational performance.
Abstract:
Migraine is a neurological disorder that affects the central nervous system causing painful attacks of headache. A genetic vulnerability and exposure to environmental triggers can influence the migraine phenotype. Migraine interferes in many facets of people’s daily life including employment commitments and their ability to look after their families resulting in a reduced quality of life. Identification of the biological processes that underlie this relatively common affliction has been difficult because migraine does not have any clearly identifiable pathology or structural lesion detectable by current medical technology. Theories to explain the symptoms of migraine have focused on the physiological mechanisms involved in the various phases of headache and include the vascular and neurogenic theories. In relation to migraine pathophysiology the trigeminovascular system and cortical spreading depression have also been implicated with supporting evidence from imaging studies and animal models. The objective of current research is to better understand the pathways and mechanisms involved in causing pain and headache to be able to target interventions. The genetic component of migraine has been teased apart using linkage studies and both candidate gene and genome-wide association studies, in family and case-control cohorts. Genomic regions that increase individual risk to migraine have been identified in neurological, vascular and hormonal pathways. This review discusses knowledge of the pathophysiology and genetic basis of migraine with the latest scientific evidence from genetic studies.
Abstract:
Solution chemistry plays a significant role in the rate and type of foulant formed on heated industrial surfaces. This paper describes the effect of sucrose, silica (SiO2), Ca2+ and Mg2+ ions, and trans-aconitic acid on the kinetics and solubility of SiO2 and calcium oxalate monohydrate (COM) in mixed salt solutions containing sucrose, and refines models previously proposed. The developed SiO2 models show that sucrose and SiO2 concentrations are the main parameters that determine the apparent order (n), apparent rate of reaction (k), and SiO2 solubility over a 24 h period. The calcium oxalate solubility model shows that while increasing [Mg2+] increases COM solubility, the reverse is true with increasing sucrose concentrations. The role of solution species in COM crystal habit is discussed and the appearance of the uncommon (001) face is explained.
Abstract:
High-speed broadband internet access is widely recognised as a catalyst to social and economic development. However, the provision of broadband Internet services with existing solutions to rural populations, scattered over extensive geographical areas, remains both an economic and technical challenge. As a feasible solution, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) proposed a highly spectrally efficient, innovative and cost-effective fixed wireless broadband access technology, which uses analogue TV frequency spectrum and Multi-User MIMO (MU-MIMO) technology with Orthogonal Frequency Division Multiplexing (OFDM). MIMO systems have emerged as a promising solution to the increasing demand for higher data rates, better quality of service, and higher network capacity. However, the performance of MIMO systems can be significantly affected by the type of propagation environment (e.g., indoor, outdoor urban, or outdoor rural) and the operating frequency. For instance, the large spectral efficiencies associated with MIMO systems, which assume a rich scattering environment as in urban settings, may not be achievable in all propagation environments, such as outdoor rural environments, due to lower scatterer density. Since this is the first time a MU-MIMO-OFDM fixed broadband wireless access solution has been deployed in a rural environment, questions arise from both theoretical and practical standpoints. For example, what capacity gains are available for the proposed solution under realistic rural propagation conditions? Currently, no comprehensive channel measurement and capacity analysis results are available for MU-MIMO-OFDM fixed broadband wireless access systems which employ large-scale multiple antennas at the Access Point (AP) and analogue TV frequency spectrum in rural environments.
Moreover, according to the literature, no deterministic MU-MIMO channel models exist that define rural wireless channels by accounting for terrain effects. This thesis fills the aforementioned knowledge gaps with channel measurements, channel modelling and a comprehensive capacity analysis for MU-MIMO-OFDM fixed wireless broadband access systems in rural environments. For the first time, channel measurements were conducted in a rural farmland near Smithton, Tasmania using CSIRO's broadband wireless access solution. A novel deterministic MU-MIMO-OFDM channel model, which can be used for accurate performance prediction of rural MU-MIMO channels with dominant Line-of-Sight (LoS) paths, was developed in this research. Results show that the proposed solution can achieve 43.7 bits/s/Hz at a Signal-to-Noise Ratio (SNR) of 20 dB in rural environments. Based on channel measurement results, this thesis verifies that the deterministic channel model accurately predicts channel capacity in rural environments, with a Root Mean Square (RMS) error of 0.18 bits/s/Hz. Moreover, this study presents a comprehensive capacity analysis of rural MU-MIMO-OFDM channels using experimental, simulated and theoretical models. Based on the validated deterministic model, the effects on channel capacity of different user distribution angles (θ) around the AP were analysed. For instance, when SNR = 20 dB, the capacity increases from 15.5 bits/s/Hz to 43.7 bits/s/Hz as θ increases from 10° to 360°. Strategies to mitigate these capacity degradation effects, based on a suitable user grouping method, are also presented. Outcomes of this thesis have already been used by CSIRO scientists to determine optimum user distribution angles around the AP, and are of great significance for researchers and MU-MIMO-OFDM system developers seeking to understand the advantages and potential capacity gains of MU-MIMO systems in rural environments.
Also, results of this study are useful to further improve the performance of MU-MIMO-OFDM systems in rural environments. Ultimately, this knowledge contribution will be useful in delivering efficient, cost-effective high-speed wireless broadband systems that are tailor-made for rural environments, thus, improving the quality of life and economic prosperity of rural populations.
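Capacity figures such as 43.7 bits/s/Hz are conventionally computed from the MIMO channel capacity formula C = log2 det(I + (SNR/Nt) H H^H). A minimal 2x2 sketch, with illustrative (not measured) channel matrices, shows how channel rank drives the capacity gap between well-separated and poorly separated spatial paths, the same mechanism behind the θ-dependent results above:

```python
import math

# MIMO capacity C = log2 det(I + (SNR/Nt) * H * H^H) in bits/s/Hz,
# written out explicitly for a 2x2 channel. The channel matrices
# below are illustrative, not measured data.
def capacity_2x2(h, snr_linear):
    # Gram matrix G = H * H^H for the 2x2 complex channel H.
    g = [[sum(h[i][k] * h[j][k].conjugate() for k in range(2))
          for j in range(2)] for i in range(2)]
    a = snr_linear / 2          # equal power split over Nt = 2 antennas
    # det(I + a*G) for a 2x2 matrix; det is real for Hermitian G.
    det = (1 + a * g[0][0]) * (1 + a * g[1][1]) \
        - (a * g[0][1]) * (a * g[1][0])
    return math.log2(det.real)

snr = 10 ** (20.0 / 10)          # 20 dB SNR -> linear value 100
h_orthogonal = [[1, 0], [0, 1]]  # ideal: two orthogonal spatial paths
h_keyhole = [[1, 1], [1, 1]]     # degenerate: rank-one (poor scattering)
print(capacity_2x2(h_orthogonal, snr))  # two full-rank streams
print(capacity_2x2(h_keyhole, snr))     # single effective stream
```

The rank-one channel supports only one effective stream, so its capacity grows far more slowly with SNR; widening the user distribution angle around the AP is, in effect, a way of pushing H H^H towards full rank.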
Abstract:
Many aspects of China's academic publishing system differ from the systems found in the liberal market-based economies of the United States, Western Europe and Australia. A high level of government intervention in both the publishing industry and academia, and the challenges associated with attempting a transition from a centrally controlled towards a more market-based publishing industry, are two notable differences; however, as in other countries, academic communities and publishers are being transformed by digital technologies. This research explores the complex yet dynamic digital transformation of academic publishing in China, with a specific focus on the open and networked initiatives inspired by Web 2.0 and social media. The thesis draws on two case studies: Science Paper Online, a government-operated online preprint platform and open access mandate; and New Science, a social reference management website operated by a group of young PhD students. Its analysis of the innovations, business models, operating strategies, influences, and difficulties faced by these two initiatives highlights important characteristics and trends in digital publishing experiments in China. The central argument of this thesis is that the open and collaborative possibilities of Web 2.0-inspired initiatives are emerging outside the established journal and monograph publishing system in China, introducing innovative and somewhat disruptive approaches to the certification, communication and commercial exploitation of knowledge. Moreover, emerging publishing models are enabling and encouraging a new system of practising and communicating science in China, putting into practice some elements of the Open Science ethos. There is evidence in Chinese practice of both disruptive change to old publishing structures and the adaptive modification of emergent replacements.
As such, the transformation from traditional to digital and interactive modes of publishing involves both competition and convergence between new and old publishers, as well as dynamics of co-evolution involving new technologies, business models, social norms, and government reform agendas. One key concern driving this work is whether there are new opportunities and new models for academic publishing in the Web 2.0 age and social media environment, which might allow the basic functions of communication and certification to be achieved more effectively. This thesis enriches existing knowledge of open and networked transformations of scholarly publishing by adding a Chinese story. Although the development of open and networked publishing platforms in China remains in its infancy, the lessons provided by this research are relevant to practitioners and stakeholders interested in understanding the transformative dynamics of networked technologies for publishing and advocating open access in practice, not only in China, but also internationally.
Abstract:
Electrostatic discharges have been identified as the most likely cause in a number of incidents of fire and explosion with unexplained ignitions. The lack of data and suitable models for this ignition mechanism creates a void in the analysis needed to quantify the importance of static electricity as a credible ignition mechanism. Quantifiable hazard analysis of the risk of ignition by static discharge cannot, therefore, be fully carried out with our current understanding of this phenomenon. The study of electrostatics has been ongoing for a long time; however, it was not until the widespread use of electronics that research was developed for the protection of electronics from electrostatic discharges. Current experimental models for electrostatic discharge, developed for the intrinsic safety of electronics, are inadequate for ignition analysis and typically are not supported by theoretical analysis. A preliminary simulation and experiment at low voltage were designed to investigate the characteristics of energy dissipation and provided a basis for a high-voltage investigation. It was seen that at low voltage the discharge energy represents about 10% of the initial capacitive energy available, and that the energy dissipation occurred within 10 ns of the initial discharge. The potential difference is greatest at the initial breakdown, when the largest amount of the energy is dissipated. The discharge pathway is then established and minimal energy is dissipated, as energy dissipation becomes greatly influenced by other components and stray resistance in the discharge circuit. From the initial low-voltage simulation work, the importance of the energy dissipation and the characteristics of the discharge were determined. After the preliminary low-voltage work was completed, a high-voltage discharge experiment was designed and fabricated.
Voltage and current measurements were recorded on the discharge circuit, allowing the discharge characteristic to be recorded and the energy dissipation in the discharge circuit to be calculated. Discharge energy calculations show consistency with the low-voltage work, with about 30-40% of the total initial capacitive energy being discharged in the resulting high-voltage arc. After the system was characterised and its operation validated, high-voltage ignition energy measurements were conducted on n-pentane evaporating in a 250 cm3 chamber. A series of ignition experiments was conducted to determine the minimum ignition energy of n-pentane. The data from the ignition work were analysed with standard statistical regression methods for tests that return binary (yes/no) data and found to be in agreement with recent publications. The research demonstrates that energy dissipation is heavily dependent on the circuit configuration, most especially on the discharge circuit's capacitance and resistance. The analysis established a discharge profile for the discharges studied and validates the application of this methodology to further research into different materials and atmospheres, by systematically examining the discharge profiles of test materials under various parameters (e.g., capacitance, inductance, and resistance). Systematic experiments examining the discharge characteristics of the spark will also help clarify the way energy is dissipated in an electrostatic discharge, enabling a better understanding of the ignition characteristics of materials in terms of energy and the dissipation of that energy in an electrostatic discharge.
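The binary (ignite / no-ignite) regression mentioned above can be sketched with a one-parameter logistic fit; the energies, outcomes, and fixed slope below are invented for illustration, and a real analysis would fit both slope and location with full logistic regression:

```python
import math

# Sketch of fitting go/no-go ignition data with a logistic model.
# Energies (mJ) and outcomes (1 = ignited) are invented; the slope S
# is fixed and only the 50% point mu (on the log10-energy scale) is
# fitted, by maximum likelihood over a simple grid.
data = [(0.10, 0), (0.15, 0), (0.20, 0), (0.25, 1),
        (0.30, 0), (0.35, 1), (0.40, 1), (0.50, 1)]

S = 0.08   # assumed logistic slope on the log10(energy) scale

def p_ignite(energy, mu):
    return 1 / (1 + math.exp(-(math.log10(energy) - mu) / S))

def log_likelihood(mu):
    return sum(y * math.log(p_ignite(e, mu)) +
               (1 - y) * math.log(1 - p_ignite(e, mu))
               for e, y in data)

# Grid search for the maximum-likelihood mu over log10(E) in [-1, 0].
grid = [-1.0 + i * 0.001 for i in range(1001)]
mu_hat = max(grid, key=log_likelihood)
print(f"estimated 50% ignition energy ~ {10 ** mu_hat:.2f} mJ")
```

The overlap region in the data (a go below a no-go) is exactly why a probabilistic model, rather than a single threshold, is the standard way to report minimum ignition energy.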
Abstract:
This thesis explored the knowledge and reasoning of young children in solving novel statistical problems, and the influence of problem context and design on their solutions. It found that young children's statistical competencies are underestimated, and that problem design and context facilitated children's application of a wide range of knowledge and reasoning skills, none of which had been taught. A qualitative design-based research method, informed by the Models and Modeling perspective (Lesh & Doerr, 2003), underpinned the study. Data modelling activities incorporating picture story books were used to contextualise the problems. Children applied real-world understanding to problem solving, including attribute identification, categorisation and classification skills. Intuitive and metarepresentational knowledge, together with inductive and probabilistic reasoning, were used to make sense of data, and a beginning awareness of statistical variation and informal inference was visible.
Abstract:
This paper compares urbanization and planning in the two sunshine states of Florida and Queensland, highlighting their similarities and differences, evaluating how effective their growth management programs have been, and examining recent changes and the challenges they bring to the respective states.