399 results for alternative modeling approaches

in Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Publisher:

Abstract:

Pesticide use in paddy rice production may contribute to adverse ecological effects in surface waters. Risk assessments conducted for regulatory purposes depend on the use of simulation models to determine predicted environmental concentrations (PECs) of pesticides. Often tiered approaches are used, in which assessments at lower tiers are based on relatively simple models with conservative scenarios, while those at higher tiers have more realistic representations of physical and biochemical processes. This chapter reviews models commonly used for predicting the environmental fate of pesticides in rice paddies. Theoretical considerations, unique features, and applications are discussed. This review is expected to provide information to guide model selection for pesticide registration, regulation, and mitigation in rice production areas.
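The chapter reviews purpose-built fate models; as a generic illustration of the conservative lower-tier logic it describes, the sketch below computes a first-tier PEC by assuming the entire application dissolves instantly and uniformly into the paddy water column, with no sorption or degradation. The formula, function name, and sample values are illustrative assumptions, not taken from the chapter.

```python
def tier1_pec_ug_per_l(application_rate_kg_ha: float, water_depth_cm: float) -> float:
    """Crude first-tier PEC: instantaneous, uniform dilution of the full
    application into the paddy water column (no sorption, no degradation).
    """
    ug_per_cm2 = application_rate_kg_ha * 1e9 / 1e8  # kg/ha -> ug/cm^2
    ml_per_cm2 = water_depth_cm                      # 1 cm of depth = 1 mL per cm^2
    return ug_per_cm2 / ml_per_cm2 * 1000            # ug/mL -> ug/L

# Example: 0.5 kg/ha applied to a paddy flooded to 10 cm
print(tier1_pec_ug_per_l(0.5, 10.0))  # -> 500.0 ug/L
```

Higher-tier models refine exactly the processes this sketch omits: partitioning to sediment, degradation, and water management.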

Relevance: 100.00%

Publisher:

Abstract:

In the commercial food industry, demonstration of microbiological safety and thermal process equivalence often involves a mathematical framework that assumes log-linear inactivation kinetics and invokes concepts of decimal reduction time (DT), z values, and accumulated lethality. However, many microbes, particularly spores, exhibit inactivation kinetics that are not log linear. This has led to alternative modeling approaches, such as the biphasic and Weibull models, which relax strong log-linear assumptions. Using a statistical framework, we developed a novel log-quadratic model, which approximates the biphasic and Weibull models and provides additional physiological interpretability. As a statistical linear model, the log-quadratic model is relatively simple to fit and straightforwardly provides confidence intervals for its fitted values. It allows a DT-like value to be derived, even from data that exhibit obvious "tailing." We also showed how existing models of non-log-linear microbial inactivation, such as the Weibull model, can fit into a statistical linear model framework that dramatically simplifies their solution. We applied the log-quadratic model to thermal inactivation data for the spore-forming bacterium Clostridium botulinum and evaluated its merits compared with those of popular previously described approaches. The log-quadratic model was used as the basis of a secondary model that can capture the dependence of microbial inactivation kinetics on temperature. This model, in turn, was linked to the spore inactivation models of Sapru et al. and Rodriguez et al. that posit different physiological states for spores within a population. We believe that the log-quadratic model provides a useful framework in which to test vitalistic and mechanistic hypotheses of inactivation by thermal and other processes. Copyright © 2009, American Society for Microbiology. All Rights Reserved.
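The abstract does not spell out the model's parameterization; a natural reading of "log-quadratic" as a statistical linear model is log10 N(t) = b0 + b1·t + b2·t², fit by ordinary least squares. The sketch below, with invented survivor data, shows why such a model is simple to fit and how one possible DT-like summary can be read off the coefficients; both the data and the choice of summary are assumptions.

```python
import numpy as np

# Invented survivor curve with "tailing": time (min) vs log10 viable count
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
logN = np.array([6.0, 5.1, 4.3, 3.7, 3.2, 2.9, 2.7])

# Log-quadratic model: log10 N(t) = b0 + b1*t + b2*t^2, linear in the b's,
# so ordinary least squares applies directly.
X = np.column_stack([np.ones_like(t), t, t**2])
b, *_ = np.linalg.lstsq(X, logN, rcond=None)

# One possible DT-like value: time for a one-decade drop at the initial
# inactivation rate, i.e. -1/b1 (valid while b1 < 0).
print("coefficients:", b.round(3))
print("initial D-value (min):", round(-1.0 / b[1], 2))
```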

Relevance: 90.00%

Publisher:

Abstract:

Business process modeling has undoubtedly emerged as a popular and relevant practice in Information Systems. Despite being an actively researched field, anecdotal evidence and experiences suggest that the focus of the research community is not always well aligned with the needs of industry. The main aim of this paper is, accordingly, to explore the current issues and the future challenges in business process modeling, as perceived by three key stakeholder groups (academics, practitioners, and tool vendors). We present the results of a global Delphi study with these three groups of stakeholders, and discuss the findings and their implications for research and practice. Our findings suggest that the critical areas of concern are standardization of modeling approaches, identification of the value proposition of business process modeling, and model-driven process execution. These areas are also expected to persist as business process modeling roadblocks in the future.

Relevance: 90.00%

Publisher:

Abstract:

Purpose – Financial information about costs and returns on investment is of key importance to strategic decision-making, but also in the context of process improvement or business engineering. In this paper we propose a value-oriented approach to business process modeling based on key concepts and metrics from operations and financial management, to aid decision-making in process re-design projects on the basis of process models.

Design/methodology/approach – We suggest a theoretically founded extension to current process modeling approaches, and delineate a framework as well as methodical support to incorporate financial information into process re-design. We use two case studies to evaluate the suggested approach.

Findings – Based on two case studies, we show that the value-oriented process modeling approach facilitates and improves managerial decision-making in the context of process re-design.

Research limitations/implications – We present design work and two case studies. More research is needed to evaluate the presented approach more thoroughly in a variety of real-life process modeling settings.

Practical implications – We show how our approach enables decision makers to make investment decisions in process re-design projects, and also how other decisions, for instance in the context of enterprise architecture design, can be facilitated.

Originality/value – This study reports on an attempt to integrate financial considerations into the act of process modeling, in order to provide more comprehensive decision-making support in process re-design projects.
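The abstract does not name the specific financial metrics used; a minimal sketch of the kind of calculation a value-oriented process model can support is discounting the savings implied by a cheaper to-be process against the redesign investment. All figures, the discount rate, and the function name below are assumptions.

```python
def redesign_npv(rate: float, investment: float, annual_savings: float, years: int) -> float:
    """Net present value of a process re-design: an upfront investment now,
    followed by equal annual savings received at each year end."""
    return -investment + sum(
        annual_savings / (1 + rate) ** t for t in range(1, years + 1)
    )

# Assumed figures: redesign costs 120k; per-instance cost drops from 8.50
# to 6.00 at 25,000 instances/year; 5-year horizon; 10% discount rate.
savings = (8.50 - 6.00) * 25_000
print(round(redesign_npv(0.10, 120_000, savings, 5)))  # positive -> re-design pays off
```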

Relevance: 90.00%

Publisher:

Abstract:

Advances in symptom management strategies through a better understanding of cancer symptom clusters depend on the identification of symptom clusters that are valid and reliable. The purpose of this exploratory research was to investigate alternative analytical approaches to identify symptom clusters for patients with cancer, using readily accessible statistical methods, and to justify which methods of identification may be appropriate for this context. Three studies were undertaken: (1) a systematic review of the literature, to identify analytical methods commonly used for symptom cluster identification for cancer patients; (2) a secondary data analysis to identify symptom clusters and compare alternative methods, as a guide to best practice approaches in cross-sectional studies; and (3) a secondary data analysis to investigate the stability of symptom clusters over time.

The systematic literature review identified, in the 10 years prior to March 2007, 13 cross-sectional studies implementing multivariate methods to identify cancer-related symptom clusters. The methods commonly used to group symptoms were exploratory factor analysis, hierarchical cluster analysis and principal components analysis. Common factor analysis methods were recommended as the best practice cross-sectional methods for cancer symptom cluster identification.

A comparison of alternative common factor analysis methods was conducted in a secondary analysis of a sample of 219 ambulatory cancer patients with mixed diagnoses, assessed within one month of commencing chemotherapy treatment. Principal axis factoring, unweighted least squares and image factor analysis identified five consistent symptom clusters, based on patient self-reported distress ratings of 42 physical symptoms. Extraction of an additional cluster was necessary when using alpha factor analysis to determine clinically relevant symptom clusters. The recommended approaches for symptom cluster identification using data that are not multivariate normal were: principal axis factoring or unweighted least squares for factor extraction, followed by oblique rotation; and use of the scree plot and Minimum Average Partial procedure to determine the number of factors. In contrast to other studies, which typically interpret pattern coefficients alone, in these studies symptom clusters were determined on the basis of structure coefficients. This approach was adopted for the stability of the results, as structure coefficients are correlations between factors and symptoms unaffected by the correlations between factors. Symptoms could be associated with multiple clusters, as a foundation for investigating potential interventions.

The stability of these five symptom clusters was investigated in separate common factor analyses, 6 and 12 months after chemotherapy commenced. Five qualitatively consistent symptom clusters were identified over time (Musculoskeletal-discomforts/lethargy, Oral-discomforts, Gastrointestinal-discomforts, Vasomotor-symptoms, Gastrointestinal-toxicities), but at 12 months two additional clusters were determined (Lethargy and Gastrointestinal/digestive symptoms). Future studies should include physical, psychological, and cognitive symptoms. Further investigation of the identified symptom clusters is required for validation, to examine causality, and potentially to suggest interventions for symptom management. Future studies should use longitudinal analyses to investigate change in symptom clusters, the influence of patient-related factors, and the impact on outcomes (e.g., daily functioning) over time.
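Principal axis factoring, the thesis's recommended extraction method, is simple enough to sketch from scratch: iterate communality estimates on the diagonal of a reduced correlation matrix and eigendecompose. The sketch below uses simulated data and covers the extraction step only; the oblique rotation and Minimum Average Partial steps the thesis recommends are omitted for brevity.

```python
import numpy as np

def principal_axis_factoring(R: np.ndarray, n_factors: int, n_iter: int = 100) -> np.ndarray:
    """Iterated principal axis factoring on a correlation matrix R.
    Communalities start at squared multiple correlations and are refined
    by repeated eigendecomposition of the reduced correlation matrix.
    Returns the unrotated loading matrix (variables x factors)."""
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))   # initial communalities (SMCs)
    for _ in range(n_iter):
        R_reduced = R.copy()
        np.fill_diagonal(R_reduced, h2)
        vals, vecs = np.linalg.eigh(R_reduced)
        top = np.argsort(vals)[::-1][:n_factors]
        loadings = vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))
        h2 = np.clip((loadings ** 2).sum(axis=1), 0.0, 1.0)  # updated communalities
    return loadings

# Simulated distress ratings: 200 patients x 8 symptoms (illustrative only)
rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 8)) + rng.normal(size=(200, 8))
R = np.corrcoef(scores, rowvar=False)
print(principal_axis_factoring(R, n_factors=2).round(2))
```

After an oblique rotation, the structure coefficients the thesis interprets are obtained by multiplying the pattern matrix by the factor correlation matrix.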

Relevance: 90.00%

Publisher:

Abstract:

Background: Physical education teacher education (PETE) programmes have been identified as a critical platform to encourage the exploration of alternative teaching approaches by pre-service teachers. However, the socio-cultural constraint of acculturation, or past physical education and sporting experiences, results in the maintenance of the status quo of a teacher-driven, reproductive paradigm. Previous studies have reported successfully overcoming the powerful influence of acculturation, resulting in a change in PETE students' custodial teaching beliefs and receptiveness to alternative teaching approaches. However, to date, limited information has been reported about how PETE students' acculturation shaped their receptiveness to an alternative teaching approach. This is particularly the case for the PETE recruits identified in the literature as most resistant to change.

Purpose: To explore the features and experiences of an alternative games teaching approach that appealed to PETE recruits identified as most resistant to change, requiring a specific sample of PETE recruits with strong, custodial, traditional physical education teaching beliefs, who are high-achieving sporting products of this traditional culture. The alternative teaching approach explored in this study is the constraints-led approach (CLA), which is similar operationally to TGfU, but distinguished by a neurobiological theoretical framework (nonlinear pedagogy) that informs learning design.

Participants and setting: A purposive sample of 10 Australian PETE students was recruited for the study. All participants initially had strong, custodial, traditional physical education teaching beliefs, and were successful sporting products of this teaching approach. After experiencing the CLA as learners during a games unit, participants demonstrated receptiveness to the alternative pedagogy.

Data collection and analysis: Semi-structured interviews and written reflections were the sources of data. Each participant was interviewed separately, once prior to participation in the games unit to explore their positive physical education experiences, and then again after participation to explore the specific games unit learning experiences that influenced their receptiveness to the alternative pedagogy. Participants completed written reflections about their personal experiences after selected practical sessions. Data were qualitatively analysed using grounded theory.

Findings: Thorough examination of the data resulted in the establishment of two prominent themes related to the appeal of the CLA for the participants: (i) psychomotor (effective in developing skill) and (ii) inclusivity (included students of varying skill level). The efficacy of the CLA in skill development was clearly an important mediator of receptiveness for highly successful products of a traditional culture. This significant finding could be explained by three key factors: the acculturation of the participants, the motor learning theory underpinning the alternative pedagogy, and the unit learning design and delivery. The inclusive nature of the CLA provided a solution to the problem of exclusion, which also made the approach attractive to participants.

Conclusion: PETE educators could consider these findings when introducing an alternative pedagogy aimed at challenging PETE recruits' custodial, traditional teaching beliefs. To mediate receptiveness, it is important that the learning theory underpinning the alternative approach is operationalised in a research-informed pedagogical learning design that facilitates students' perceptions of the effectiveness of the approach through experiencing and/or observing it working.

Relevance: 90.00%

Publisher:

Abstract:

This article considers the merits of alternative policy approaches to the management of companies in insolvency administration, in particular from an identity economics theoretical perspective. The use of this perspective provides a novel assessment of the policy alternatives for insolvency administration, which can be characterized as either following the more flexible United States Chapter 11-style debtor-in-possession arrangement, or relying on the appointment of an external administrator or trustee to manage the insolvent company, automatically displacing incumbent management. This analysis indicates that stigma and reputational damage from the automatic removal of managers in voluntary administration leads to "identity loss", and that an insider alternative to the current external administration approach could be a beneficial policy change.

Relevance: 80.00%

Publisher:

Abstract:

In this paper, we examine the design of business process diagrams in contexts where novice analysts have only basic design tools such as paper and pencils available, and little to no understanding of formalized modeling approaches. Based on a quasi-experimental study with 89 BPM students, we identify five distinct process design archetypes, ranging from textual through hybrid to graphical representation forms. We also examine the quality of the designs and identify which representation formats enable an analyst to articulate business rules, states, events, activities, and temporal and geospatial information in a process model. We found that the quality of the process designs decreases with the increased use of graphics, and that hybrid designs featuring appropriate text labels and abstract graphical forms are well suited to describe business processes. Our research has implications for practical process design work in industry as well as for academic curricula on process design.

Relevance: 80.00%

Publisher:

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches, and the host of assumptions that come with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate it. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
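A minimal version of the simulation argument can be reproduced in a few lines: generate counts from Poisson trials with small, unequal probabilities across sites and a short observation window, then compare the observed share of zeros with what an equidispersed Poisson with the same mean would predict. The site counts, probabilities, and heterogeneity distribution below are assumptions chosen only to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(42)

n_sites, n_trials = 500, 5000   # e.g. road segments x vehicle passages observed
# Small, unequal per-passage crash probabilities, heterogeneous across sites
p = np.clip(rng.gamma(shape=0.3, scale=1e-3, size=n_sites), 0.0, 1.0)
crashes = rng.binomial(n_trials, p)   # per-site counts from Poisson trials

mean = crashes.mean()
print(f"mean count     = {mean:.2f}")
print(f"observed P(0)  = {(crashes == 0).mean():.2f}")
print(f"Poisson  P(0)  = {np.exp(-mean):.2f}")  # far lower: 'excess' zeros arise
                                                # without any dual-state process
```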

Relevance: 80.00%

Publisher:

Abstract:

In a digital world, users' Personally Identifiable Information (PII) is normally managed with a system called an Identity Management System (IMS). There are many types of IMSs, and there are situations when two or more IMSs need to communicate with each other (such as when a service provider needs to obtain some identity information about a user from a trusted identity provider). There can be interoperability issues when the communicating parties use different types of IMS. To facilitate interoperability between different IMSs, an Identity Meta System (IMetS) is normally used. An IMetS can, at least theoretically, join various types of IMSs to make them interoperable and give users the illusion that they are interacting with just one IMS. However, due to the complexity of an IMS, attempting to join various types of IMSs is a technically challenging task, let alone assessing how well an IMetS manages to integrate these IMSs. The first contribution of this thesis is the development of a generic IMS model called the Layered Identity Infrastructure Model (LIIM). Using this model, we develop a set of properties that an ideal IMetS should provide. This idealized form is then used as a benchmark to evaluate existing IMetSs.

Different types of IMS provide varying levels of privacy protection support. Unfortunately, as observed by Jøsang et al. (2007), there is insufficient privacy protection in many of the existing IMSs. In this thesis, we study and extend a type of privacy enhancing technology known as an Anonymous Credential System (ACS). In particular, we extend the ACS which is built on the cryptographic primitives proposed by Camenisch, Lysyanskaya, and Shoup. We call this system the Camenisch, Lysyanskaya, Shoup - Anonymous Credential System (CLS-ACS). The goal of CLS-ACS is to let users be as anonymous as possible. Unfortunately, CLS-ACS has problems, including (1) the concentration of power in a single entity - known as the Anonymity Revocation Manager (ARM) - who, if malicious, can trivially reveal a user's PII (resulting in an illegal revocation of the user's anonymity), and (2) poor performance due to the resource-intensive cryptographic operations required.

The second and third contributions of this thesis are two protocols that reduce the trust dependencies on the ARM during users' anonymity revocation. Both protocols distribute trust from the ARM to a set of n referees (n > 1), resulting in a significant reduction of the probability of an anonymity revocation being performed illegally. The first protocol, called the User Centric Anonymity Revocation Protocol (UCARP), allows a user's anonymity to be revoked in a user-centric manner (that is, the user is aware that his/her anonymity is about to be revoked). The second protocol, called the Anonymity Revocation Protocol with Re-encryption (ARPR), allows a user's anonymity to be revoked by a service provider in an accountable manner (that is, there is a clear mechanism to determine which entity can eventually learn - and possibly misuse - the identity of the user).

The fourth contribution of this thesis is a protocol called the Private Information Escrow bound to Multiple Conditions Protocol (PIEMCP). This protocol is designed to address the performance issue of CLS-ACS by applying the CLS-ACS in a federated single sign-on (FSSO) environment. Our analysis shows that PIEMCP can both reduce the number of expensive modular exponentiation operations required and lower the risk of illegal revocation of users' anonymity.

Finally, the protocols proposed in this thesis are complex and need to be formally evaluated to ensure that their required security properties are satisfied. In this thesis, we use Coloured Petri Nets (CPNs) and their corresponding state space analysis techniques. All of the protocols proposed in this thesis have been formally modeled and verified using these formal techniques. The fifth contribution of this thesis is therefore a demonstration of the applicability of CPNs and their corresponding analysis techniques in modeling and verifying privacy enhancing protocols. To our knowledge, this is the first time that CPNs have been comprehensively applied to model and verify privacy enhancing protocols. From our experience, we also propose several CPN modeling approaches, including the modeling of complex cryptographic primitives (such as zero-knowledge proof protocols), attack parameterization, and others. The proposed approaches can be applied to other security protocols, not just privacy enhancing protocols.
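The abstract does not describe how UCARP or ARPR distribute trust; the standard building block for splitting a capability among n referees so that no single party can act alone is threshold secret sharing. The sketch below is plain Shamir sharing over a prime field, given purely as an illustration of the n-referee idea, not as the thesis's actual protocol.

```python
import random

P = 2**127 - 1  # Mersenne prime; toy field modulus, not a vetted parameter

def share(secret: int, n: int, k: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares such that any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

# A revocation key split among 5 referees: any 3 acting together recover it,
# but no single referee (or pair) learns anything about the key alone.
key = 123456789
shares = share(key, n=5, k=3)
assert reconstruct(shares[:3]) == key and reconstruct(shares[2:]) == key
```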

Relevance: 80.00%

Publisher:

Abstract:

Process modeling is an important design practice in organizational improvement projects. In this paper, we examine the design of business process diagrams in contexts where novice analysts have only basic design tools such as paper and pencils available, and little to no understanding of formalized modeling approaches. Based on a quasi-experimental study with 89 BPM students, we identify five distinct process design archetypes, ranging from textual through hybrid to graphical representation forms. We examine the quality of the designs and identify which representation formats enable an analyst to articulate business rules, states, events, activities, and temporal and geospatial information in a process model. We found that the quality of the process designs decreases with the increased use of graphics, and that hybrid designs featuring appropriate text labels and abstract graphical forms appear well suited to describe business processes. We further examine how process design preferences predict formalized process modeling ability. Our research has implications for practical process design work in industry as well as for academic curricula on process design.

Relevance: 80.00%

Publisher:

Abstract:

Nowadays, Workflow Management Systems (WfMSs) and, more generally, Process Management Systems (PMSs), which are process-aware Information Systems (PAISs), are widely used to support many human organizational activities, ranging from well-understood, relatively stable and structured processes (supply chain management, postal delivery tracking, etc.) to processes that are more complicated, less structured and may exhibit a high degree of variation (health-care, emergency management, etc.). Every aspect of a business process involves a certain amount of knowledge, which may be complex depending on the domain of interest. The adequate representation of this knowledge is determined by the modeling language used. Some processes behave in a way that is well understood, predictable and repeatable: the tasks are clearly delineated and the control flow is straightforward. Recent discussions, however, illustrate the increasing demand for solutions for knowledge-intensive processes, where these characteristics are less applicable. The actors involved in the conduct of a knowledge-intensive process have to deal with a high degree of uncertainty. Tasks may be hard to perform and the order in which they need to be performed may be highly variable. Modeling knowledge-intensive processes can be complex, as it may be hard to capture at design-time what knowledge is available at run-time. In realistic environments, for example, actors lack important knowledge at execution time, or this knowledge can become obsolete as the process progresses. Even if each actor (at some point) has perfect knowledge of the world, it may not be certain of its beliefs at later points in time, since tasks by other actors may change the world without those changes being perceived. Typically, a knowledge-intensive process cannot be adequately modeled by classical, state-of-the-art process/workflow modeling approaches. In some respects there is a lack of maturity when it comes to capturing the semantic aspects involved and reasoning about them.

The main focus of the 1st International Workshop on Knowledge-intensive Business Processes (KiBP 2012) was investigating how techniques from different fields, such as Artificial Intelligence (AI), Knowledge Representation (KR), Business Process Management (BPM), Service Oriented Computing (SOC), etc., can be combined with the aim of improving the modeling and enactment phases of a knowledge-intensive process. KiBP 2012 was held as part of the program of the 2012 Knowledge Representation & Reasoning International Conference (KR 2012) in Rome, Italy, in June 2012. The workshop was hosted by the Dipartimento di Ingegneria Informatica, Automatica e Gestionale Antonio Ruberti of Sapienza Università di Roma, with financial support of the University, through grant 2010-C26A107CN9 TESTMED, and the EU Commission through the projects FP7-25888 Greener Buildings and FP7-257899 Smart Vortex.

This volume contains the 5 papers accepted and presented at the workshop. Each paper was reviewed by three members of the internationally renowned Program Committee. In addition, a further paper was invited for inclusion in the workshop proceedings and for presentation at the workshop. Two keynote talks, one by Marlon Dumas (Institute of Computer Science, University of Tartu, Estonia) on "Integrated Data and Process Management: Finally?" and the other by Yves Lespérance (Department of Computer Science and Engineering, York University, Canada) on "A Logic-Based Approach to Business Processes Customization", completed the scientific program. We would like to thank all the Program Committee members for their valuable work in selecting the papers, Andrea Marrella for his valuable work as publication and publicity chair of the workshop, and Carola Aiello and the consulting agency Consulta Umbria for the organization of this successful event.

Relevance: 80.00%

Publisher:

Abstract:

This article follows the lead of several researchers who claim there is an urgent need to utilize insights from the arts, aesthetics and the humanities to expand our understanding of leadership. It endeavours to do this by exploring the metaphor of dance. It begins by critiquing current policy metaphors used in the leadership literature that present a narrow and functional view of leadership. It then presents and discusses a conceptual model of leadership as dance that incorporates key dimensions such as context, dance and music, and includes Polanyi's concept of connoisseurship. The article identifies some of the tensions that are inherent in both notions of dance and leadership. The final part discusses the implications the model raises for broadening our understanding of leadership and school leadership preparation programmes. Three core implications raised here are: (i) making space for alternative metaphors in leadership preparation programmes; (ii) providing opportunities for students of leadership to understand through alternative learning approaches; and (iii) providing opportunities for engagement in alternative research agendas.

Relevance: 80.00%

Publisher:

Abstract:

Atmospheric ultrafine particles play an important role in affecting human health, altering climate and degrading visibility. Numerous studies have been conducted to better understand the formation process of these particles, including field measurements, laboratory chamber studies and mathematical modeling approaches. Field studies on new particle formation found that formation processes were significantly affected by atmospheric conditions, such as the availability of particle precursors and meteorological conditions. However, those studies were mainly carried out in rural areas of the Northern Hemisphere, and information on new particle formation in urban areas, especially those in subtropical regions, is limited. In general, subtropical regions display a higher level of solar radiation, along with stronger photochemical reactivity, than the regions investigated in previous studies. However, based on the results of these studies, the mechanisms involved in the new particle formation process remain unclear, particularly in the Southern Hemisphere. Therefore, in order to fill this gap in knowledge, a new particle formation study was conducted in a subtropical urban area in the Southern Hemisphere during 2009, which measured particle size distribution at different locations in Brisbane, Australia.

Characterisation of nucleation events was conducted at the campus building of the Queensland University of Technology (QUT), located in an urban area of Brisbane. Overall, the annual average number concentrations of ultrafine, Aitken and nucleation mode particles were found to be 9.3 × 10³, 3.7 × 10³ and 5.6 × 10³ cm⁻³, respectively. This was comparable to levels measured in urban areas of northern Europe, but lower than those from polluted urban areas such as the Yangtze River Delta, China, and Huelva and Santa Cruz de Tenerife, Spain. Average particle number concentration (PNC) in the Brisbane region did not show significant seasonal variation; however, a relatively large variation was observed during the warmer season. Diurnal variation of Aitken and nucleation mode particles displayed different patterns, which suggested that direct vehicle exhaust emissions were a major contributor of Aitken mode particles, while nucleation mode particles originated from vehicle exhaust emissions in the morning and photochemical production at around noon.

A total of 65 nucleation events were observed during 2009, of which 40 were classified as nucleation growth events and the remainder as nucleation burst events. An interesting observation in this study was that all nucleation growth events were associated with vehicle exhaust emission plumes, while the nucleation burst events were associated with industrial emission plumes from an industrial area. The average particle growth rate for nucleation events was found to be 4.6 nm hr⁻¹ (ranging from 1.79 to 7.78 nm hr⁻¹), which is comparable to other urban studies conducted in the United States, while monthly particle growth rates were found to be positively related to monthly solar radiation (r = 0.76, p < 0.05). The particle growth rate values reported in this work are the first of their kind to be reported for a subtropical urban area of Australia.

Furthermore, the influence of nucleation events on PNC within the urban airshed was also investigated. PNC was simultaneously measured at urban (QUT), roadside (Woolloongabba) and semi-urban (Rocklea) sites in Brisbane during 2009. Total PNC at these sites was found to be significantly affected by regional nucleation events. The relative fractions of PNC to total daily PNC observed at QUT, Woolloongabba and Rocklea were found to be 12%, 9% and 14%, respectively, during regional nucleation events. These values were higher than those observed as a result of vehicle exhaust emissions during weekday mornings, which ranged from 5.1% to 5.5% at QUT and Woolloongabba. In addition, PNC in the semi-urban area of Rocklea increased by a factor of 15.4 when it was upwind from urban pollution sources under the influence of nucleation burst events.

Finally, we investigated the influence of sulphuric acid on new particle formation in the study region. A H₂SO₄ proxy was calculated using [SO₂], solar radiation and particle condensation sink data to represent the new particle production strength for the urban, roadside and semi-urban areas of Brisbane during June-July 2009. The temporal variations of the H₂SO₄ proxies and the nucleation mode particle concentration were found to be in phase during nucleation events in the urban and roadside areas. In contrast, the peak proxy concentration occurred 1-2 hr prior to the observed peak in nucleation mode particle concentration at the downwind semi-urban area of Brisbane. A moderate to strong linear relationship was found between the proxy and the freshly formed particles, with r² values of 0.26-0.77 during the nucleation events. In addition, the log[H₂SO₄ proxy] required to produce new particles was found to be ~1.0 ppb W m⁻² s and below 0.5 ppb W m⁻² s for the urban and semi-urban areas, respectively. The particle growth rates were similar during nucleation events at the three study locations, with an average value of 2.7 ± 0.5 nm hr⁻¹. This result suggested that a similar nucleation mechanism dominated in the study region, which was strongly related to sulphuric acid concentration; however, the relationship between the proxy and PNC was poor in the semi-urban area of Rocklea. This can be explained by the fact that the nucleation process was initiated upwind of the site and the resultant particles were transported via the wind to Rocklea. This explanation is also supported by the higher geometric mean diameter value observed for particles during the nucleation events, and by the time lag relationship between the H₂SO₄ proxy and PNC observed at Rocklea.

In summary, particle size distribution was continuously measured in a subtropical urban area of the Southern Hemisphere during 2009, the findings from which formed the first particle size distribution dataset for the study region. The characteristics of nucleation events in the Brisbane region were quantified, and the properties of the nucleation growth and burst events are discussed in detail using a case study approach. To further investigate the influence of nucleation events on PNC in the study region, PNC was simultaneously measured at three locations to examine the spatial variation of PNC during regional nucleation events. In addition, the impact of upwind urban pollution on the downwind semi-urban area was quantified during these nucleation events. Sulphuric acid was found to be an important factor influencing new particle formation in the urban and roadside areas of the study region; however, a direct relationship with nucleation events at the semi-urban site was not observed. This study provides an overview of new particle formation in the Brisbane region and its influence on PNC in the surrounding area. The findings of this work are the first of their kind for an urban area in the Southern Hemisphere.
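The abstract names the proxy's inputs ([SO₂], solar radiation, particle condensation sink) and quotes thresholds in ppb W m⁻² s, which is dimensionally consistent with the commonly used form proxy = [SO₂] × radiation / CS. The sketch below computes that form; the proportionality constant is omitted and the sample values are assumptions, so only the relative (temporal) variation is meaningful here.

```python
def h2so4_proxy(so2_ppb: float, radiation_w_m2: float, cs_per_s: float) -> float:
    """Sulphuric acid proxy ~ [SO2] * solar radiation / condensation sink.
    Units: ppb * (W m^-2) / (s^-1) = ppb W m^-2 s. Unscaled, so it tracks
    the temporal variation of H2SO4 rather than its absolute concentration."""
    return so2_ppb * radiation_w_m2 / cs_per_s

# Assumed midday urban values: 1.5 ppb SO2, 800 W m^-2 radiation, CS = 0.01 s^-1
print(h2so4_proxy(1.5, 800.0, 0.01))  # -> 120000.0 ppb W m^-2 s
```

Note that the absolute scale depends on the radiation band (e.g. UVB versus global radiation) and unit conventions, which the abstract does not specify, so the sketch's values are not directly comparable with the quoted thresholds.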

Relevance: 80.00%

Publisher:

Abstract:

Grid-connected photovoltaic (PV) inverters fall into three broad categories - central, string and module integrated converters (MICs). MICs offer many advantages in performance and flexibility, but are at a cost disadvantage. Two alternative novel approaches proposed by the author - cascaded dc-dc MICs and bypass dc-dc MICs - integrate a simple non-isolated intelligent dc-dc converter with each PV module to provide the advantages of dc-ac MICs at a lower cost. A suitable universal 150 W 5 A dc-dc converter design is presented, based on two interleaved MOSFET half bridges. Testing shows zero voltage switching (ZVS) keeps losses under 1 W for bi-directional power flows up to 15 W between two adjacent 12 V PV modules for the bypass application, and efficiencies over 94% for most of the operational power range for the cascaded converter application. Based on the experimental results, potential optimizations to further reduce losses are discussed.
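As a quick sanity check on what the quoted efficiency implies thermally, the arithmetic below converts an efficiency figure into dissipated power at a given output; the pairing of 94% with full 150 W output is an assumption for illustration, not a measurement from the paper.

```python
def converter_loss_w(p_out_w: float, efficiency: float) -> float:
    """Power dissipated by a converter delivering p_out_w at a given efficiency."""
    p_in = p_out_w / efficiency
    return p_in - p_out_w

# At the 94% efficiency quoted for the cascaded application, a module-level
# converter delivering its full 150 W rating would dissipate roughly:
print(round(converter_loss_w(150.0, 0.94), 1))  # -> ~9.6 W
```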