969 results for technology standard


Relevance: 20.00%

Abstract:

The stylized facts that motivate this thesis include the diversity in growth patterns observed across countries during the process of economic development, and the divergence over time in income distributions both within and across countries. This thesis constructs a dynamic general equilibrium model in which technology adoption is costly and agents are heterogeneous in their initial holdings of resources. Given a household's resource level, this study examines how adoption costs influence the evolution of household income over time and the timing of the transition to more productive technologies. The analytical results of the model characterize three growth outcomes associated with the technology adoption process, depending on productivity differences between the technologies. These are appropriately labeled 'poverty trap', 'dual economy' and 'balanced growth'. The model is thus capable of explaining the observed diversity in growth patterns across countries, as well as the divergence of incomes over time. Numerical simulations of the model further illustrate features of this transition. They suggest that differences in adoption costs account for the timing of households' decisions to switch technology, which leads to a disparity in incomes across households during the technology adoption process. Since this determines the timing of complete adoption of the technology within a country, the implications for cross-country income differences are clear. Moreover, the timing of technology adoption appears to shape households' growth patterns, which differ across income groups. The findings also show that, in the presence of costs associated with the adoption of more productive technologies, inequalities of income and wealth may increase over time, tending to delay convergence in income levels. Initial levels of inequality in resources also affect the date of complete adoption of more productive technologies. The issue of increasing income inequality in the process of technology adoption opens up another direction for research: increasing inequality implies that distributive conflicts may emerge during the transition, with political-economy consequences. The model is therefore extended to include such issues. Without any political considerations, taxes would lead to a reduction in inequality and convergence of incomes across agents. However, this process is delayed once politico-economic influences are taken into account. Moreover, the political outcome is suboptimal, essentially because there is resistance to the complete adoption of the advanced technology.
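To make the mechanism concrete, the following is a minimal numerical sketch of costly technology adoption by heterogeneous households. The Solow-style accumulation rule, the switching heuristic and all parameter values are illustrative assumptions, not the thesis's actual model.

```python
# Households accumulate wealth under a simple Solow-style rule and pay a
# one-off cost to switch to a more productive technology. Every functional
# form and parameter value here is an illustrative assumption.
SAVING, DELTA, ALPHA = 0.25, 0.05, 0.5
A_LO, A_HI = 1.0, 1.6          # technology productivities
COST = 10.0                    # fixed technology adoption cost

def adoption_date(wealth, periods=100):
    """Return the period in which a household switches technology (or None)."""
    adopted, date = False, None
    for t in range(periods):
        # Heuristic switching rule: adopt once current output under the
        # advanced technology, net of the adoption cost, exceeds output
        # under the traditional one.
        if (not adopted and wealth > COST
                and A_HI * (wealth - COST) ** ALPHA > A_LO * wealth ** ALPHA):
            wealth -= COST
            adopted, date = True, t
        a = A_HI if adopted else A_LO
        wealth = (1 - DELTA) * wealth + SAVING * a * wealth ** ALPHA
    return date

# Richer households switch earlier, so incomes diverge during the transition.
for w0 in (1.0, 5.0, 20.0):
    print(f"initial wealth {w0:>5}: adoption at t={adoption_date(w0)}")
```

Under these assumptions, richer households adopt immediately while poorer ones adopt decades later, reproducing in miniature the disparity in adoption timing described above.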

Relevance: 20.00%

Abstract:

Expert knowledge is valuable in many modelling endeavours, particularly where data are not extensive or sufficiently robust. In Bayesian statistics, expert opinion may be formulated as informative priors, to provide an honest reflection of the current state of knowledge before updating with new information. Technology is increasingly being exploited to help support the process of eliciting such information. This paper reviews the benefits that have been gained from utilizing technology in this way. These benefits can be structured within the six-step elicitation design framework proposed recently by Low Choy et al. (2009). We assume that the purpose of elicitation is to formulate a Bayesian statistical prior, either to provide a standalone expert-defined model or to be updated with new data within a Bayesian analysis. We also assume that the model has been pre-specified before the software is selected. In this case, technology has the most to offer in targeting what experts know (E2), eliciting and encoding expert opinions (E4), enhancing accuracy (E5), and providing an effective and efficient protocol (E6). Benefits include:
- providing an environment with familiar nuances (to make the expert comfortable) where experts can explore their knowledge from various perspectives (E2);
- automating tedious or repetitive tasks, thereby minimizing calculation errors, as well as encouraging interaction between elicitors and experts (E5);
- cognitive gains from educating users, enabling instant feedback (E2, E4-E5), and providing alternative methods of communicating assessments and feedback information, since experts think and learn differently; and
- ensuring a repeatable and transparent protocol is used (E6).
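As a minimal sketch of the encoding step (E4), the snippet below converts an elicited most-likely value and an "equivalent number of observations" into an informative Beta prior and updates it with data. The mode/sample-size parameterisation and all numbers are illustrative assumptions, not taken from the paper or from the Low Choy et al. framework itself.

```python
# Encode an expert's opinion as a Beta(a, b) prior and update it with
# binomial data. Parameterisation and numbers are illustrative only.
def beta_from_expert(mode, effective_n):
    """Convert an elicited most-likely value and an equivalent number of
    observations into Beta(a, b) hyperparameters (valid for effective_n > 2)."""
    a = mode * (effective_n - 2) + 1
    b = (1 - mode) * (effective_n - 2) + 1
    return a, b

# Hypothetical expert: occurrence probability most likely ~0.3,
# with confidence worth ~20 observations.
a, b = beta_from_expert(mode=0.3, effective_n=20)

# The Beta prior is conjugate for binomial data, so updating simply adds
# observed successes and failures to the hyperparameters.
successes, failures = 7, 13
a_post, b_post = a + successes, b + failures
print(f"prior mean {a / (a + b):.3f} -> "
      f"posterior mean {a_post / (a_post + b_post):.3f}")
```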

Relevance: 20.00%

Abstract:

With the growth of high-technology industries and knowledge-intensive services, the pursuit of industrial competitiveness has progressed from a broad concern with the processes of industrialisation to a more focused analysis of the factors explaining cross-national variation in the level of participation in knowledge industries. From an examination of cross-national data, the paper develops the proposition that particular elements of the domestic science, technology and industry infrastructure (such as the stock of knowledge and competence in the economy, the capacity for learning and generation of new ideas, and the capacity to commercialise new ideas) vary cross-nationally and are related to the level of participation of a nation in knowledge-intensive activities. Existing understandings of the role of the state in promoting industrial competitiveness might be expanded to incorporate an analysis of the contribution of the state through the building of competencies in science, technology and industry.

Keywords: knowledge economy; comparative public policy; innovation; science and technology policy

Relevance: 20.00%

Abstract:

Business process modeling is widely regarded as one of the most popular forms of conceptual modeling. However, little is known about the capabilities and deficiencies of process modeling grammars, or about how existing deficiencies affect actual process modeling practice. This paper is a first contribution towards a theory-driven, exploratory empirical investigation of the ontological deficiencies of process modeling with the industry-standard Business Process Modeling Notation (BPMN). We analyse BPMN using a theory of ontological expressiveness, and through a series of semi-structured interviews with BPMN adopters we explore empirically how the grammar is actually used. Nine ontological deficiencies related to the practice of modeling with BPMN are identified, concerning, for example, the capture of business rules and the specification of process decompositions. We also uncover five contextual factors that affect the use of process modeling grammars, such as tool support and modeling conventions. We discuss implications for research and practice, highlighting the need to consider representational issues and contextual factors in decisions relating to BPMN adoption in organizations.

Relevance: 20.00%

Abstract:

The construction sector's application of lead indicators in general, and positive performance indicators (PPIs) in particular, is largely seen by the sector as not providing generalizable indicators of safety effectiveness. Similarly, safety culture is often cited as an essential factor in improving safety performance, yet there is no known reliable way of measuring it. This paper proposes that accurate measurement of safety effectiveness and safety culture is a prerequisite for assessing safe behaviours, safety knowledge, effective communication and safety performance. Currently there are no standard national or international safety effectiveness indicators (SEIs) accepted by the construction industry. The challenge is that the quantitative survey instruments developed for measuring safety culture and/or safety climate suffer from inherent methodological flaws and do not produce reliable, representative data on attitudes to safety. Measures that combine quantitative and qualitative components are needed if safety effectiveness indicators are to have clear utility.

Relevance: 20.00%

Abstract:

Establishing a nationwide Electronic Health Record (EHR) system has become a primary objective for many countries around the world, including Australia, in order to improve the quality of healthcare while decreasing its cost. Doing so will require federating the large number of patient data repositories currently in use throughout the country. However, implementation of EHR systems is being hindered by several obstacles, among them concerns about data privacy and trustworthiness. Current IT solutions fail to satisfy patients' privacy desires and do not provide a trustworthiness measure for medical data. This thesis starts from the observation that existing EHR system proposals suffer from six serious shortcomings that affect patients' privacy and safety, and medical practitioners' trust in EHR data: accuracy and privacy concerns over linking patients' existing medical records; the inability of patients to control who accesses their private data; the inability to protect against inferences about patients' sensitive data; the lack of a mechanism for evaluating the trustworthiness of medical data; and the failure of current healthcare workflow processes to capture and enforce patients' privacy desires. Following an action research method, this thesis addresses the above shortcomings by firstly proposing an architecture for linking electronic medical records in an accurate and private way, in which patients are given control over what information can be revealed about them. This is accomplished by extending the structure and protocols introduced in federated identity management to link a patient's EHR to his or her existing medical records using pseudonym identifiers. Secondly, a privacy-aware access control model is developed to satisfy patients' privacy requirements. The model is developed by integrating three standard access control models in a way that gives patients access control over their private data and ensures that legitimate uses of EHRs are not hindered. Thirdly, a probabilistic approach for detecting and restricting inference channels arising from publicly available medical data is developed to guard against indirect access to a patient's private data. This approach is based upon a Bayesian network and the causal probabilistic relations that exist between medical data fields. The resulting definitions and algorithms show how an inference channel can be detected and restricted to satisfy patients' expressed privacy goals. Fourthly, a medical data trustworthiness assessment model is developed to evaluate the quality of medical data by assessing the trustworthiness of its sources (e.g. a healthcare provider or medical practitioner). In this model, Beta and Dirichlet reputation systems are used to collect reputation scores about medical data sources, and these are used to compute the trustworthiness of medical data via subjective logic. Finally, healthcare workflow management processes are extended to capture and enforce patients' privacy policies. This is accomplished by developing a conceptual model that introduces new workflow notions to make the workflow management system aware of a patient's privacy requirements. These extensions are then implemented in the YAWL workflow management system.
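As a rough illustration of the reputation component, the sketch below computes the expected trustworthiness of a data source under a Beta reputation model. The source names and rating counts are hypothetical, and the thesis's full model additionally uses Dirichlet reputation and subjective logic, which this sketch does not cover.

```python
# Expected trustworthiness of a medical data source under a Beta reputation
# model: positive/negative ratings update a Beta(1, 1) prior, and the
# posterior mean serves as the trust score.
def beta_reputation(positive, negative):
    """Expected trustworthiness under a Beta(positive+1, negative+1) model."""
    return (positive + 1) / (positive + negative + 2)

# Hypothetical rating histories for two sources of medical records.
print(f"hospital A: {beta_reputation(positive=48, negative=2):.3f}")
print(f"clinic B:   {beta_reputation(positive=3, negative=4):.3f}")
```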

Relevance: 20.00%

Abstract:

In Australia and many other countries, water used in the manufacture of concrete must be potable. Concrete properties are thought to be highly influenced by the type of water used and its proportion in the concrete mix, yet little is known about the effects of alternative water sources on concrete mix design. Identifying the level and nature of contamination in available water sources, and their subsequent influence on concrete properties, is therefore becoming increasingly important. Of most interest is the recycled washout water currently used by batch plants as mixing water for concrete. Washout water is used on site for a variety of purposes, including washing truck agitator bowls, wetting down aggregate, and run-off. This report presents current information on the quality of concrete mixing water, in terms of mandatory limits and guidelines on impurities, and investigates the impact of recycled washout water on concrete performance. It also explores new sources of recycled water in terms of their quality and suitability for use in concrete production. The complete recycling of washout water has been considered for concrete mixing plants because of its great benefits in reducing waste disposal costs and conserving the environment. The objective of this study was to investigate the effects of using washout water on the properties of fresh and hardened concrete. This was carried out through a 10-week sampling program at three representative sites across South East Queensland. The sample sites chosen represented a cross-section of plant recycling methods, from most effective to least effective. The washout water samples collected from each site were analysed in accordance with Standards Association of Australia AS/NZS 5667.1:1998. These tests revealed that, compared with tap water, the washout water was higher in alkalinity, pH and total dissolved solids content. However, washout water with a total dissolved solids content of less than 6% could be used to produce concrete of acceptable strength and durability. The results were then interpreted using the chemometric techniques Principal Component Analysis (PCA) and SIMCA, and the multi-criteria decision-making methods PROMETHEE and GAIA were used to rank the samples from cleanest to least clean. It was found that even the simplest purifying processes provided washout water suitable for the manufacture of concrete. These results were compared with a series of alternative water sources, including treated effluent, sea water and dam water, which were subjected to the same testing parameters as the reference set. Analysis of these results found that, despite having higher levels of both organic and inorganic constituents, the waters complied with the parameter thresholds given in ASTM C913-08. All of the alternative sources were found to be suitable for the manufacture of plain concrete.
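As a rough illustration of the chemometric ranking step, the sketch below projects water-quality measurements onto principal components and orders the samples along the first component. The variables (pH, alkalinity, total dissolved solids) and all values are hypothetical stand-ins; the study itself combined PCA and SIMCA with PROMETHEE and GAIA.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical water-quality table: columns are pH, alkalinity (mg/L),
# total dissolved solids (mg/L); rows are samples.
samples = np.array([
    [7.2,  90,  400],   # tap water (reference)
    [11.8, 900, 4200],  # washout water, least-effective recycling
    [10.5, 450, 2100],  # washout water, moderate recycling
    [8.1,  150, 800],   # washout water, most-effective recycling
])
scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(samples))

# Rank along the first principal component, oriented so the reference
# tap-water sample sits at the "clean" end.
pc1 = scores[:, 0] if scores[0, 0] < scores[1:, 0].mean() else -scores[:, 0]
print("cleanest -> least clean (sample indices):", np.argsort(pc1))
```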

Relevance: 20.00%

Abstract:

A group key exchange (GKE) protocol allows a set of parties to agree upon a common secret session key over a public network. In this thesis, we focus on designing efficient GKE protocols using public key techniques and appropriately revising security models for GKE protocols. For the purpose of modelling and analysing the security of GKE protocols, we apply the widely accepted computational complexity approach. The contributions of the thesis to the area of GKE protocols are manifold. We propose the first GKE protocol that requires only one round of communication and is proven secure in the standard model. Our protocol is generically constructed from a key encapsulation mechanism (KEM). We also suggest an efficient KEM from the literature, which satisfies the underlying security notion, to instantiate the generic protocol. We then concentrate on enhancing the security of one-round GKE protocols. A new model of security for forward secure GKE protocols is introduced and a generic one-round GKE protocol with forward security is then presented. The security of this protocol is also proven in the standard model. We also propose an efficient forward secure encryption scheme that can be used to instantiate the generic GKE protocol. Our next contributions are to the security models of GKE protocols. We observe that the analysis of GKE protocols has not been as extensive as that of two-party key exchange protocols. In particular, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for GKE protocols. We model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure against KCI attacks. A new proof of security for an existing GKE protocol is given under the revised model assuming random oracles. Subsequently, we treat the security of GKE protocols in the universal composability (UC) framework. We present a new UC ideal functionality for GKE protocols capturing the security attribute of contributiveness. An existing protocol with minor revisions is then shown to realize our functionality in the random oracle model. Finally, we explore the possibility of constructing GKE protocols in the attribute-based setting. We introduce the concept of attribute-based group key exchange (AB-GKE). A security model for AB-GKE and a one-round AB-GKE protocol satisfying our security notion are presented. The protocol is generically constructed from a new cryptographic primitive called encapsulation policy attribute-based KEM (EP-AB-KEM), which we introduce in this thesis. We also present a new EP-AB-KEM with a proof of security assuming generic groups and random oracles. The EP-AB-KEM can be used to instantiate our generic AB-GKE protocol.
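Since the constructions above are generic in a key encapsulation mechanism, a sketch of the KEM primitive itself may help. The following toy hashed-ElGamal KEM (a textbook construction, not the scheme suggested in the thesis) illustrates the KeyGen/Encap/Decap interface; the group parameters are deliberately tiny and utterly insecure.

```python
import hashlib
import secrets

# Toy hashed-ElGamal KEM over a tiny prime-order subgroup. Parameters are
# for illustration only; real deployments use standardized large groups.
P = 1019          # safe prime: P = 2*Q + 1
Q = 509           # prime order of the subgroup generated by G
G = 4             # generator of the order-Q subgroup of Z_P*
NBYTES = (P.bit_length() + 7) // 8

def keygen():
    """KeyGen: secret exponent x and public key h = G^x mod P."""
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def encap(h):
    """Encap: ciphertext c = G^r and session key K = H(h^r)."""
    r = secrets.randbelow(Q - 1) + 1
    c = pow(G, r, P)
    key = hashlib.sha256(pow(h, r, P).to_bytes(NBYTES, "big")).digest()
    return c, key

def decap(x, c):
    """Decap: recover K = H(c^x) from ciphertext c and secret key x."""
    return hashlib.sha256(pow(c, x, P).to_bytes(NBYTES, "big")).digest()

# Correctness check: sender and receiver derive the same session key.
sk, pk = keygen()
ct, k_sender = encap(pk)
assert decap(sk, ct) == k_sender
```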

Relevance: 20.00%

Abstract:

This work focuses on developing a commissioning procedure through which a Monte Carlo model that uses BEAMnrc's standard VARMLC component module can be adapted to match a specific BrainLAB m3 micro-multileaf collimator (μMLC). A set of measurements is recommended for use as a reference against which the model can be tested and optimised. These include radiochromic film measurements of dose from small and offset fields, as well as measurements of μMLC transmission and interleaf leakage. Simulations and measurements of μMLC scatter factors are shown to be insensitive to the relevant model parameters and are therefore not recommended, unless the output of the linear accelerator model is in doubt. Ultimately, this note provides detailed instructions for those intending to optimise a VARMLC model to match the dose delivered by their local BrainLAB m3 μMLC device.

Relevance: 20.00%

Abstract:

This study explores coteaching and cogenerative dialoguing with parents, to investigate how these practices may be employed to engage parents more meaningfully in schools. The cogenerative dialogues provided a space in which participants became aware of the resources available for coteaching, made decisions about planning and enacting coteaching, and built an interstitial culture that facilitated positive parent-teacher relationships.

Relevance: 20.00%

Abstract:

Life-cycle management (LCM) has been employed in the management of construction projects for many years in order to reduce whole-of-life cost, time and risk, and to improve the service delivered to owners. However, owing to the lack of an effective information-sharing platform, LCM is not used effectively in the construction industry. Based on an analysis of the information flow of LCM, a virtual prototyping (VP)-based communication and collaboration information platform is proposed, and the platform is then customized using DASSAULT software. The whole process of implementing VP-based LCM is also discussed and, through a simple case study, it is demonstrated that the VP-based communication and collaboration information platform is an effective tool for supporting the LCM of construction projects.

Relevance: 20.00%

Abstract:

In this study, the feasibility of difference imaging for improving the contrast of electronic portal imaging device (EPID) images is investigated. The difference imaging technique consists of acquiring two EPID images (with and without an additional layer of attenuating medium placed on the surface of the EPID) and subtracting one image from the other. The resulting difference image shows improved contrast compared to a standard EPID image, since it is generated by lower-energy photons. The results of this study show, firstly, that this method can produce images exhibiting greater contrast than is seen in standard megavoltage EPID images and, secondly, that the optimal thickness of attenuating material for producing maximum contrast enhancement may vary with phantom thickness and composition. Further studies of the possibilities and limitations of the difference imaging technique, and the physics behind it, are therefore recommended.
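The core of the technique is a simple image subtraction; the sketch below illustrates it on synthetic arrays standing in for the two EPID acquisitions. The flat 15% attenuation factor and noise levels are arbitrary assumptions, not values from the study.

```python
import numpy as np

# Synthetic stand-ins for two EPID acquisitions: one open image and one
# taken with an attenuating layer on the EPID surface. In the real
# technique the contrast gain comes from the lower-energy photons that the
# attenuator preferentially removes; here attenuation is modelled as a
# flat 15% loss plus noise.
rng = np.random.default_rng(0)
open_image = rng.normal(1.00, 0.01, size=(256, 256))            # no attenuator
attenuated = open_image * 0.85 + rng.normal(0, 0.01, (256, 256))  # with attenuator

# The difference image isolates the signal removed by the attenuator.
difference = open_image - attenuated

# Normalise for display or comparison against the standard EPID image.
difference /= difference.max()
```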

Relevance: 20.00%

Abstract:

We consider a time and space-symmetric fractional diffusion equation (TSS-FDE) under homogeneous Dirichlet and homogeneous Neumann conditions. The TSS-FDE is obtained from the standard diffusion equation by replacing the first-order time derivative with a Caputo fractional derivative, and the second-order space derivative with a symmetric fractional derivative. First, the method of separation of variables expresses the analytical solution of the TSS-FDE in terms of the Mittag-Leffler function. Second, we propose two numerical methods to approximate the Caputo time fractional derivative: the finite difference method and the Laplace transform method. The symmetric space fractional derivative is approximated using the matrix transform method. Finally, numerical results demonstrate the effectiveness of the numerical methods and confirm the theoretical claims.
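For concreteness, a TSS-FDE of this kind is commonly written as below; the diffusion coefficient $K_\beta$ and the eigenpair notation $(\lambda_n, \varphi_n)$ are standard illustrative choices rather than symbols quoted from the abstract.

```latex
% A standard form of the TSS-FDE: Caputo derivative of order alpha in time,
% symmetric (Riesz) fractional derivative of order beta in space. The
% coefficient K_beta and this notation are illustrative choices.
\[
  \frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}}
    = K_{\beta}\,\frac{\partial^{\beta} u(x,t)}{\partial |x|^{\beta}},
  \qquad 0 < \alpha \le 1, \; 1 < \beta \le 2.
\]
% Separation of variables yields eigenfunction expansions whose time factors
% are Mittag-Leffler functions E_alpha, with (lambda_n, phi_n) the Laplacian
% eigenpairs for the chosen (Dirichlet or Neumann) boundary conditions:
\[
  u(x,t) = \sum_{n=1}^{\infty} c_n\,
           E_{\alpha}\!\bigl(-K_{\beta}\,\lambda_n^{\beta/2}\,t^{\alpha}\bigr)\,
           \varphi_n(x).
\]
```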

Relevance: 20.00%

Abstract:

Knowledge of the accuracy of dose calculations in intensity-modulated radiotherapy (IMRT) of the head and neck is essential for clinical confidence in these highly conformal treatments. High dose gradients are frequently placed very close to critical structures, such as the spinal cord, and good coverage of complex-shaped nodal target volumes is important for long-term local control. A phantom study is presented comparing the performance of standard clinical pencil-beam and collapsed-cone dose algorithms with Monte Carlo calculation and three-dimensional gel dosimetry measurement. All calculations and measurements are normalized to the median dose in the primary planning target volume, making this a purely relative study. The phantom simulates tissue, air and bone for a typical neck section and is treated using an inverse-planned five-field IMRT treatment, similar in character to clinically used class solutions. The results indicate that the pencil-beam algorithm fails to correctly model the relative dose distribution surrounding the air cavity, leading to an overestimate of target coverage. The collapsed-cone and Monte Carlo results are very similar, indicating that the clinical collapsed-cone algorithm is entirely sufficient for routine clinical use. The gel measurement shows generally good agreement with the collapsed-cone and Monte Carlo calculated doses, particularly for the spinal cord dose and nodal target coverage, giving greater confidence in the use of this class solution.
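As an illustration of the normalization step that makes this a purely relative study, the sketch below scales two dose grids to the median dose inside a planning target volume (PTV) mask before comparing them. The dose arrays and the PTV mask are synthetic placeholders, not data from the study.

```python
import numpy as np

# Synthetic stand-ins for two dose grids (e.g. collapsed-cone vs Monte
# Carlo) and a hypothetical PTV mask.
rng = np.random.default_rng(1)
dose_cc = rng.uniform(0.5, 2.1, size=(64, 64, 64))
dose_mc = dose_cc * rng.normal(1.0, 0.02, dose_cc.shape)
ptv_mask = np.zeros(dose_cc.shape, dtype=bool)
ptv_mask[24:40, 24:40, 24:40] = True

def normalise_to_ptv_median(dose, mask):
    """Scale a dose grid so the median dose inside the PTV equals 1.0."""
    return dose / np.median(dose[mask])

rel_cc = normalise_to_ptv_median(dose_cc, ptv_mask)
rel_mc = normalise_to_ptv_median(dose_mc, ptv_mask)

# A simple point-wise relative difference map between the two algorithms.
diff_percent = 100.0 * (rel_cc - rel_mc) / rel_mc
print(f"max |difference| in PTV: {np.abs(diff_percent[ptv_mask]).max():.2f}%")
```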