254 results for Complexity of Distribution


Relevance: 90.00%

Abstract:

Chlamydia pecorum is a significant pathogen of domestic livestock and wildlife. We have developed a C. pecorum-specific multilocus sequence analysis (MLSA) scheme to examine the genetic diversity of and relationships between Australian sheep, cattle, and koala isolates. An MLSA of seven concatenated housekeeping gene fragments was performed using 35 isolates, including 18 livestock isolates (11 Australian sheep, one Australian cow, and six U.S. livestock isolates) and 17 Australian koala isolates. Phylogenetic analyses showed that the koala isolates formed a distinct clade, with limited clustering with C. pecorum isolates from Australian sheep. We identified 11 MLSA sequence types (STs) among Australian C. pecorum isolates, 10 of them novel, with koala and sheep sharing at least one identical ST (designated ST2013Aa). ST23, previously identified in global C. pecorum livestock isolates, was observed here in a subset of Australian bovine and sheep isolates. Most notably, ST23 was found in association with multiple disease states and hosts, providing insights into the transmission of this pathogen between livestock hosts. The complexity of the epidemiology of this disease was further highlighted by the observation that at least two sheep were infected with different C. pecorum STs in the eyes and gastrointestinal tract. We have demonstrated the feasibility of our MLSA scheme for understanding the host relationships that exist among Australian C. pecorum strains and provide the first molecular epidemiological data on infections in Australian livestock hosts.
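The ST assignment described above follows the standard MLST convention: isolates with identical allele profiles across all seven housekeeping loci receive the same sequence type. A minimal sketch, with made-up allele numbers and isolate names:

```python
def assign_sts(profiles):
    """Assign sequence types (STs): isolates sharing the same allele
    numbers at all seven loci get the same ST number."""
    st_of = {}
    sts = {}
    for isolate, alleles in profiles.items():
        key = tuple(alleles)
        if key not in sts:
            sts[key] = len(sts) + 1   # next free ST number
        st_of[isolate] = sts[key]
    return st_of

# Hypothetical allele profiles (one allele number per locus):
profiles = {
    "koala_1":  [1, 4, 2, 2, 1, 3, 1],
    "sheep_1":  [1, 4, 2, 2, 1, 3, 1],   # identical profile -> same ST
    "sheep_2":  [2, 4, 2, 5, 1, 3, 1],
    "cattle_1": [2, 4, 2, 5, 1, 3, 2],
}
st = assign_sts(profiles)
```

A koala and a sheep isolate with identical profiles share an ST, mirroring the shared ST2013Aa reported above.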

Relevance: 90.00%

Abstract:

This paper presents a new hybrid evolutionary algorithm based on Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) for daily Volt/Var control in distribution systems including Distributed Generators (DGs). Due to the small X/R ratio and radial configuration of distribution systems, DGs have a considerable impact on this problem. Since DGs are owned by independent power producers or private owners, a price-based methodology is proposed as a signal to encourage DG owners to participate in active power generation. The daily Volt/Var control is generally a nonlinear optimization problem. Therefore, an efficient hybrid evolutionary method based on PSO and ACO, called HPSO, is proposed to determine the active power values of DGs, the reactive power values of capacitors and the tap positions of transformers for the next day. The feasibility of the proposed algorithm is demonstrated on the IEEE 34-bus distribution feeder and compared with methods based on the original PSO, ACO and GA algorithms.
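The paper's HPSO combines PSO and ACO; as a rough illustration of the PSO half only, here is a minimal particle swarm optimizer (a generic sketch on a toy sphere function, not the authors' algorithm or their Volt/Var formulation):

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each velocity update pulls the
    particle toward its personal best and the swarm's global best."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(1)
best, val = pso_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

In the paper's setting the decision vector would hold DG active power values, capacitor reactive powers and transformer tap positions rather than a toy sphere argument.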

Relevance: 90.00%

Abstract:

The safety of passengers is a major concern to airports. In the event of a crisis, having an effective and efficient evacuation process in place can significantly enhance passenger safety. Hence, it is necessary for airport operators to have an in-depth understanding of the evacuation process of their airport terminal. Although evacuation models have been used to study pedestrian behaviour for decades, little research has considered evacuees' group dynamics and the complexity of the environment. In this paper, an agent-based model is presented to simulate the passenger evacuation process. Different exits were allocated to passengers based on their location and security level. The simulation results show that the evacuation time can be influenced by passenger group dynamics. The model also provides a convenient way to design an airport evacuation strategy and examine its efficiency. The model was created using AnyLogic software, and its parameters were initialised using recent research data published in the literature.
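As a toy illustration of the exit-allocation rule described above (the fields and the security-level comparison are assumptions for illustration, not the paper's actual logic):

```python
def allocate_exit(passenger, exits):
    """Assign a passenger the nearest exit their security level permits.
    passenger = (x, y, security_level); each exit = (x, y, required_level)."""
    px, py, level = passenger
    allowed = [e for e in exits if level >= e[2]]
    # nearest permitted exit by squared Euclidean distance
    return min(allowed, key=lambda e: (e[0] - px) ** 2 + (e[1] - py) ** 2)

# Hypothetical terminal layout: (x, y, required_security_level)
exits = [(0, 0, 0), (100, 0, 1), (50, 80, 0)]
exit_for = allocate_exit((90, 10, 0), exits)  # a level-0 passenger
```

A level-0 passenger near (90, 10) is routed past the closer but restricted level-1 exit to the nearest exit they are allowed to use.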

Relevance: 90.00%

Abstract:

We examine the security of the 64-bit lightweight block cipher PRESENT-80 against related-key differential attacks. With a computer search we are able to prove that for any related-key differential characteristic on full-round PRESENT-80, the probability of the characteristic in the 64-bit state alone is not higher than 2^-64. To overcome the computational complexity of the search, which is exponential in the state and key sizes, we use truncated differences; however, as the key schedule is not nibble-oriented, we switch to actual differences and apply early-abort techniques to prune the tree-based search. With a new method called the extended split approach we are able to make the whole search feasible, and we implement and run it in practice. Our approach targets the PRESENT-80 cipher; however, with small modifications it can be reused for other lightweight ciphers as well.
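The early-abort pruning used in such a tree-based search can be sketched generically: extend a characteristic round by round, tracking its weight (the negative log2 of its probability), and abandon any branch whose accumulated weight already reaches the 64-bit bound. A toy sketch of this pruning idea, not the authors' extended split approach:

```python
def best_characteristic(transitions, rounds, weight_cap=64):
    """Depth-first search for the lowest-weight differential characteristic.
    transitions[state] lists (next_state, weight) options per round, where
    weight = -log2(transition probability).  Branches whose accumulated
    weight meets the cap (or the current best) are abandoned early."""
    best = [weight_cap]  # characteristics at or above the cap are discarded

    def dfs(state, r, weight):
        if weight >= best[0]:       # early abort: cannot beat current best
            return
        if r == rounds:
            best[0] = weight
            return
        for nxt, w in transitions.get(state, []):
            dfs(nxt, r + 1, weight + w)

    for start in transitions:
        dfs(start, 0, 0)
    return best[0]

# Toy transition table: two difference "states" with per-round weights.
toy = {"A": [("A", 3), ("B", 2)], "B": [("A", 4), ("B", 5)]}
```

If the returned value equals the cap, no characteristic of weight below the cap exists, which is the shape of the full-round bound proved in the paper.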

Relevance: 90.00%

Abstract:

Classical results in unconditionally secure multi-party computation (MPC) protocols with a passive adversary indicate that every n-variate function can be computed by n participants, such that no set of size t < n/2 participants learns any additional information other than what they could derive from their private inputs and the output of the protocol. We study unconditionally secure MPC protocols in the presence of a passive adversary in the trusted setup (‘semi-ideal’) model, in which the participants are supplied with some auxiliary information (which is random and independent from the participant inputs) ahead of the protocol execution (such information can be purchased as a “commodity” well before a run of the protocol). We present a new MPC protocol in the trusted setup model, which allows the adversary to corrupt an arbitrary number t < n of participants. Our protocol makes use of a novel subprotocol for converting an additive secret sharing over a field to a multiplicative secret sharing, and can be used to securely evaluate any n-variate polynomial G over a field F, with inputs restricted to non-zero elements of F. The communication complexity of our protocol is O(ℓ · n²) field elements, where ℓ is the number of non-linear monomials in G. Previous protocols in the trusted setup model require communication proportional to the number of multiplications in an arithmetic circuit for G; thus, our protocol may offer savings over previous protocols for functions with a small number of monomials but a large number of multiplications.
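The two sharing forms that the conversion subprotocol bridges can be illustrated directly. A minimal sketch over a prime field showing share generation and reconstruction only (not the paper's protocol, and with no communication or security machinery):

```python
import random

P = 2_147_483_647  # a Mersenne prime; any prime field works

def additive_share(secret, n):
    """Split `secret` into n shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def multiplicative_share(secret, n):
    """Split a non-zero `secret` into n shares whose product is it mod P."""
    assert secret % P != 0
    shares = [random.randrange(1, P) for _ in range(n - 1)]
    prod = 1
    for s in shares:
        prod = prod * s % P
    shares.append(secret * pow(prod, -1, P) % P)  # modular inverse (Py 3.8+)
    return shares

s = 123456
add = additive_share(s, 5)
mul = multiplicative_share(s, 5)
```

Multiplicative shares make evaluating a monomial local (each party multiplies its shares), which is why converting additive shares of the non-zero inputs into multiplicative form is the protocol's key step.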

Relevance: 90.00%

Abstract:

How do you identify "good" teaching practice in the complexity of a real classroom? How do you know that beginning teachers can recognise effective digital pedagogy when they see it? How can teacher educators see through their students' eyes? The study in this paper arose from our interest in what pre-service teachers "see" when observing effective classroom practice and how this might reveal their own technological, pedagogical and content knowledge. We asked 104 pre-service teachers from Early Years, Primary and Secondary cohorts to watch and comment upon selected exemplary videos of teachers using ICT (information and communication technologies) in Science. The pre-service teachers recorded their observations using a simple PMI (plus, minus, interesting) matrix; these observations were then coded using the SOLO Taxonomy to look for evidence of their familiarity with and judgements of digital pedagogies. From this, we determined that the majority of pre-service teachers we surveyed used a descriptive rather than a reflective strategy; that is, they did not extend beyond what was demonstrated in the teaching exemplar or differentiate between action and purpose. We also concluded that this method warrants wider trialling as a means of evaluating students' understandings of the complexity of the digital classroom.

Relevance: 90.00%

Abstract:

We have studied a mineral sample of mottramite, PbCu(VO4)(OH), from Tsumeb, Namibia, using a combination of scanning electron microscopy with EDX and Raman and infrared spectroscopy. Chemical analysis shows principally the elements V, Pb and Cu. Ca occurs in partial substitution for Pb, as do P and As in substitution for V. Minor amounts of Si and Cr were also observed. The Raman band of mottramite at 829 cm-1 is assigned to the ν1 symmetric (VO4)3- stretching mode. The complexity of the spectra is attributed to the chemical composition of the Tsumeb mottramite. The ν3 antisymmetric vibrational mode of mottramite is observed as very low intensity bands at 716 and 747 cm-1. The series of Raman bands at 411, 439 and 451 cm-1, and probably also the band at 500 cm-1, are assigned to the (VO4)3- ν2 bending mode. The series of Raman bands at 293, 333 and 366 cm-1 are attributed to the (VO4)3- ν4 bending modes. The ν2, ν3 and ν4 regions are complex, and this is attributed to symmetry reduction of the vanadate unit from Td to Cs.

Relevance: 90.00%

Abstract:

This thesis opens up the design space for awareness research in CSCW and HCI. By challenging the prevalent understanding of roles in awareness processes and exploring different mechanisms for actively engaging users in the awareness process, this thesis provides a better understanding of the complexity of these processes and suggests practical solutions for designing and implementing systems that support active awareness. Mutual awareness, a prominent research topic in the fields of Computer-Supported Cooperative Work (CSCW) and Human-Computer Interaction (HCI), refers to a fundamental aspect of a person's work: their ability to gain a better understanding of a situation by perceiving and interpreting their co-workers' actions. Technologically mediated awareness, used to support co-workers across distributed settings, distinguishes between the roles of the actor, whose actions are often limited to being the target of an automated data-gathering process, and the receiver, who wants to be made aware of the actors' actions. This receiver-centric view of awareness, which focuses on helping receivers deal with complex sets of awareness information, stands in stark contrast to our understanding of awareness as a social process involving complex interactions between both actors and receivers. It fails to take into account an actor's intimate understanding of their own activities and the contribution that this subjective understanding could make in providing richer awareness information. In this thesis I challenge the prevalent receiver-centric notion of awareness and explore the conceptual foundations, design, implementation and evaluation of an alternative active awareness approach by making the following five contributions.

Firstly, I identify the limitations of existing awareness research and solicit further evidence to support the notion of active awareness. I analyse ethnographic workplace studies that demonstrate how actors engage in an intricate interplay involving the monitoring of their co-workers' progress and the displaying of aspects of their own activities that may be of relevance to others. The examination of a large body of awareness research reveals that while disclosing information is a common practice in face-to-face collaborative settings, it has been neglected in implementations of technically mediated awareness. Based on these considerations, I introduce the notion of intentional disclosure to describe the action of users actively and deliberately contributing awareness information.

Secondly, I consider challenges and potential solutions for the design of active awareness. I compare a range of systems, each allowing users to share information about their activities at various levels of detail, and discuss one of the main challenges to active awareness: that disclosing information about activities requires some degree of effort. I discuss various representations of effort in collaborative work. These considerations reveal that there is a trade-off between the richness of awareness information and the effort required to provide it.

Thirdly, I propose a framework for active awareness, aimed at helping designers understand the scope and limitations of different types of intentional disclosure. I draw on the identified richness/effort trade-off to develop two types of intentional disclosure, both of which aim to facilitate the disclosure of information while reducing the effort required to do so. For both of these approaches, direct and indirect disclosure, I delineate how they differ from related approaches and define a set of design criteria intended to guide their implementation.

Fourthly, I demonstrate how the framework of active awareness can be applied in practice by building two proof-of-concept prototypes that implement direct and indirect disclosure respectively. AnyBiff, implementing direct disclosure, allows users to create, share and use shared representations of activities in order to express their current actions and intentions. SphereX, implementing indirect disclosure, represents shared areas of interest or working contexts and links sets of activities to these representations.

Lastly, I present the results of the qualitative evaluation of the two prototypes and analyse the extent to which they implemented their respective disclosure mechanisms and supported active awareness. Both systems were deployed and tested in real-world environments. The results for AnyBiff showed that users developed a wide range of activity representations, some unanticipated, and actively used the system to disclose information. The results further highlighted a number of design considerations relating to the relationship between awareness and communication, and the role of ambiguity. The evaluation of SphereX validated the feasibility of the indirect disclosure approach, but highlighted the challenges of implementing cross-application awareness support and of translating the concept to users. The study resulted in design recommendations aimed at improving the implementation of future systems.

Relevance: 90.00%

Abstract:

Perfluorooctanoic acid (PFOA) and perfluorooctane sulfonic acid (PFOS) have been used for a variety of applications, including fluoropolymer processing, fire-fighting foams and surface treatments, since the 1950s. Both PFOS and PFOA are polyfluoroalkyl chemicals (PFCs), man-made compounds that are persistent in the environment and in humans; some PFCs have shown adverse effects in laboratory animals. Here we describe the application of a simple one-compartment pharmacokinetic model to estimate total intakes of PFOA and PFOS for the general population of urban areas on the east coast of Australia. Key parameters for this model are the elimination rate constants and the volume of distribution within the body. A volume of distribution of 170 mL/kg bw was calibrated for PFOA using data from two communities in the United States where residents' serum concentrations could be assumed to result primarily from a known and characterized source: drinking water contaminated with PFOA by a single fluoropolymer manufacturing facility. For PFOS, a value of 230 mL/kg bw was used, based on adjustment of the PFOA value. Applying measured Australian serum data to the model gave mean ± standard deviation intake estimates for PFOA of 1.6 ± 0.3 ng/kg bw/day for males and females >12 years of age combined, based on samples collected in 2002-2003, and 1.3 ± 0.2 ng/kg bw/day based on samples collected in 2006-2007. Mean intakes of PFOS were 2.7 ± 0.5 ng/kg bw/day for males and females >12 years of age combined based on the 2002-2003 samples, and 2.4 ± 0.5 ng/kg bw/day for the 2006-2007 samples. An ANOVA of PFOA intake demonstrated significant differences by age group (p = 0.03), sex (p = 0.001) and date of collection (p < 0.001). Estimated intake rates were highest in those aged >60 years, higher in males than in females, and higher in 2002-2003 than in 2006-2007. The same pattern was seen for PFOS intake, with significant differences by age group (p < 0.001), sex (p = 0.001) and date of collection (p = 0.016).
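At steady state, a one-compartment model equates intake with elimination, giving intake = k · Vd · C_serum, where k is the elimination rate constant. A sketch using the PFOA volume of distribution quoted above; the half-life and serum concentration below are placeholder assumptions, not figures from the study:

```python
import math

# Assumed inputs -- the half-life and serum value are illustrative
# placeholders, not data from this study.
half_life_days = 3.5 * 365.25      # assumed PFOA elimination half-life (years -> days)
vd_ml_per_kg = 170.0               # volume of distribution (mL per kg body weight)
serum_ng_per_ml = 7.0              # hypothetical measured serum concentration

# At steady state: intake rate = elimination rate = k * Vd * C_serum
k_per_day = math.log(2) / half_life_days
intake_ng_per_kg_day = k_per_day * vd_ml_per_kg * serum_ng_per_ml
```

The units work out because Vd in mL/kg times a concentration in ng/mL yields a body burden in ng/kg, which the rate constant converts to a daily intake per kg body weight.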

Relevance: 90.00%

Abstract:

Automated process discovery techniques aim at extracting process models from information system logs. Existing techniques in this space are effective when applied to relatively small or regular logs, but generate spaghetti-like and sometimes inaccurate models when confronted with logs of high variability. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models. The idea is to split the log into clusters and to discover one model per cluster. This leads to a collection of process models – each one representing a variant of the business process – as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity and low fitness. In this setting, this paper presents a two-way divide-and-conquer process discovery technique, wherein the discovered process models are split on the one hand by variants and on the other hand hierarchically using subprocess extraction. Splitting is performed in a controlled manner in order to achieve user-defined complexity or fitness thresholds. Experiments on real-life logs show that the technique produces collections of models substantially smaller than those extracted by applying existing trace clustering techniques, while allowing the user to control the fitness of the resulting models.
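The split-by-variants idea can be sketched crudely: cluster traces, then discover one model per cluster. In this sketch, clustering is by identical activity sets and the "model" is a directly-follows relation, which is far simpler than the paper's technique and purely for illustration:

```python
from collections import defaultdict

def cluster_traces(log):
    """Group traces that share the same activity set: a crude stand-in
    for trace clustering (real techniques use richer similarity)."""
    clusters = defaultdict(list)
    for trace in log:
        clusters[frozenset(trace)].append(trace)
    return list(clusters.values())

def directly_follows(cluster):
    """Discover a minimal 'model' per cluster: its directly-follows pairs."""
    return {(a, b) for trace in cluster for a, b in zip(trace, trace[1:])}

# Hypothetical event log: each trace is a sequence of activity labels.
log = [["a", "b", "c"], ["a", "c", "b"], ["a", "d"], ["a", "d"]]
clusters = cluster_traces(log)
models = [directly_follows(c) for c in clusters]
```

Each cluster yields a small, variant-specific model instead of one all-encompassing spaghetti model; the paper then additionally splits hierarchically via subprocess extraction.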

Relevance: 90.00%

Abstract:

Real-world business process models may consist of hundreds of elements and have a sophisticated structure. Although there are tasks for which such models are valuable and appreciated, in general complexity has a negative influence on model comprehension and analysis. Thus, means for managing the complexity of process models are needed. One approach is abstraction of business process models: the creation of a process model which preserves the main features of the initial elaborate process model but leaves out insignificant details. In this paper we study the structural aspects of process model abstraction and introduce an abstraction approach based on process structure trees (PSTs). The developed approach ensures that the abstracted process model preserves the ordering constraints of the initial model. It surpasses pattern-based process model abstraction approaches by handling graph-structured process models of arbitrary structure. We also provide an evaluation of the proposed approach.
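The core tree-based idea, replacing a whole subtree with a single aggregate activity so that ordering constraints are preserved, can be sketched on a toy process tree (the operators, threshold and labels are illustrative assumptions, not the paper's algorithm):

```python
def leaves(node):
    """Count the activities (leaves) in a process-tree node."""
    if isinstance(node, str):
        return 1
    op, children = node
    return sum(leaves(c) for c in children)

def collect(node):
    """List the activity labels under a node."""
    if isinstance(node, str):
        return [node]
    return [a for c in node[1] for a in collect(c)]

def abstract(node, max_leaves=2):
    """Collapse any subtree with at most `max_leaves` activities into one
    aggregate activity; replacing whole subtrees keeps ordering intact."""
    if isinstance(node, str):
        return node
    if leaves(node) <= max_leaves:
        return "+".join(sorted(collect(node)))
    op, children = node
    return (op, [abstract(c, max_leaves) for c in children])

# Toy tree: nodes are either activity labels or (operator, children).
tree = ("SEQ", ["a", ("XOR", ["b", "c"]), ("SEQ", ["d", ("AND", ["e", "f"])])])
small = abstract(tree, max_leaves=2)
```

Because only complete subtrees are collapsed, no ordering constraint between the surviving nodes is changed, which is the property the PST-based approach guarantees.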

Relevance: 90.00%

Abstract:

A pro-fibrotic role of matrix metalloproteinase-9 (MMP-9) in tubular cell epithelial-mesenchymal transition (EMT) is well established in renal fibrosis; however, studies from our group and others have demonstrated some previously unrecognized complexity of MMP-9 that has been overlooked in renal fibrosis. Therefore, the aim of this study was to determine, via MMP-9 inhibition, the expression pattern and origin of MMP-9 and the exact mechanism underlying its contribution to unilateral ureteral obstruction (UUO), a well-established model of renal fibrosis. Renal MMP-9 expression in BALB/c mice with UUO was examined on days 1, 3, 5, 7, 9, 11 and 14. To inhibit MMP-9 activity, an MMP-2/9 inhibitor or an MMP-9-neutralizing antibody was administered daily for 4 consecutive days from day 0-3, 6-9 or 10-13, and tissues were harvested at day 14. In UUO, there was a biphasic early- and late-stage upregulation of MMP-9 activity. Interestingly, tubular epithelial cells (TECs) were the predominant source of MMP-9 during the early stage, whereas TECs, macrophages and myofibroblasts produced MMP-9 during late-stage UUO. Early- and late-stage inhibition of MMP-9 in UUO mice significantly reduced tubular cell EMT and renal fibrosis. Moreover, MMP-9 inhibition caused a significant reduction in MMP-9-cleaved osteopontin and macrophage infiltration in the UUO kidney. Our in vitro study showed that MMP-9-cleaved osteopontin enhanced macrophage transwell migration and that MMP-9 of both primary TEC and macrophage origin induced tubular cell EMT. In summary, our results suggest that MMP-9 of both TEC and macrophage origin may directly or indirectly contribute to the pathogenesis of renal fibrosis via osteopontin cleavage, which in turn further recruits macrophages and induces tubular cell EMT. Our study also highlights the time dependency of MMP-9 expression and the potential of a stage-specific inhibition strategy against renal fibrosis.

Relevance: 90.00%

Abstract:

Service processes such as giving financial advice, booking a business trip or conducting a consulting project have emerged as units of analysis of high interest for the business process and service management communities, in practice and academia alike. While the transactional nature of production processes is relatively well understood and deployed, the less predictable and highly interactive nature of service processes still lacks appropriate methodological grounding in many areas. This paper proposes the framework of a process laboratory as a new IT artefact to facilitate the holistic analysis and simulation of such service processes. Using financial services as an example, it is shown how such a process laboratory can be used to reduce the complexity of service process analysis and facilitate operational service process control.

Relevance: 90.00%

Abstract:

Introduction Given the known challenges of obtaining accurate measurements of small radiation fields, and the increasing use of small field segments in IMRT beams, this study examined the possible effects of referencing inaccurate field output factors in the planning of IMRT treatments. Methods This study used the Brainlab iPlan treatment planning system to devise IMRT treatment plans for delivery using the Brainlab m3 microMLC (Brainlab, Feldkirchen, Germany). Four pairs of sample IMRT treatments were planned using volumes, beams and prescriptions based on a set of test plans described in AAPM TG 119's recommendations for the commissioning of IMRT treatment planning systems [1]:
• C1, a set of three 4 cm volumes with different prescription doses, was modified to reduce the size of the PTV to 2 cm across and to include an OAR dose constraint for one of the other volumes.
• C2, a prostate treatment, was planned as described by the TG 119 report [1].
• C3, a head-and-neck treatment with a PTV larger than 10 cm across, was excluded from the study.
• C4, an 8 cm long C-shaped PTV surrounding a cylindrical OAR, was planned as described in the TG 119 report [1] and then replanned with the length of the PTV reduced to 4 cm.
Both plans in each pair used the same beam angles, collimator angles, dose reference points, prescriptions and constraints. However, one plan in each pair had its beam modulation optimisation and dose calculation completed with reference to existing iPlan beam data, and the other with reference to revised beam data. The beam data revisions consisted of increasing the field output factor for a 0.6 × 0.6 cm² field by 17% and increasing the field output factor for a 1.2 × 1.2 cm² field by 3%.
Results The use of different beam data produced different optimisation results, with different microMLC apertures and segment weightings between the two plans for each treatment. This led to large differences (up to 30%, with an average of 5%) between reference point doses in each pair of plans. These point dose differences are more indicative of the modulation of the plans than of any clinically relevant changes to the overall PTV or OAR doses. By contrast, the differences in the maximum, minimum and mean doses to the PTVs and OARs were smaller (less than 1% for all beams in three out of four pairs of treatment plans) but are more clinically important. Of the four test cases, only the shortened (4 cm) version of TG 119's C4 plan showed substantial differences between the overall doses calculated in the volumes of interest using the different sets of beam data, suggesting that treatment doses could be affected by changes to small field output factors. An analysis of the complexity of this pair of plans, using Crowe et al.'s TADA code [2], indicated that iPlan's optimiser had produced IMRT segments comprising larger numbers of small microMLC leaf separations than in the other three test cases.
Conclusion The use of altered small field output factors can result in substantially altered doses when large numbers of small leaf apertures are used to modulate the beams, even when treating relatively large volumes.
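To first order, if each segment's dose contribution scales linearly with its output factor, the fractional point-dose shift is the dose weight of each segment size times its output factor error. A back-of-the-envelope sketch with hypothetical segment weightings, not data from this study:

```python
def dose_shift_from_of_error(segments):
    """Estimate the fractional change in a point dose when field output
    factors are rescaled.  Each segment contributes (weight, of_error):
    weight = fraction of the dose from that segment size, of_error =
    fractional change in its output factor (e.g. 0.17 for +17%).
    Assumes dose contributions scale linearly with the output factor."""
    return sum(w * e for w, e in segments)

# Hypothetical plan: 20% of the dose from 0.6 cm segments (+17% OF error),
# 30% from 1.2 cm segments (+3%), the remainder from larger fields (no change).
segments = [(0.20, 0.17), (0.30, 0.03), (0.50, 0.0)]
shift = dose_shift_from_of_error(segments)
```

This toy weighting illustrates why plans dominated by many small leaf apertures, like the shortened C4 case, are the ones whose doses move appreciably when small field output factors change.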

Relevance: 90.00%

Abstract:

Several recently proposed ciphers, for example Rijndael and Serpent, are built with layers of small S-boxes interconnected by linear key-dependent layers. Their security relies on the fact that the classical methods of cryptanalysis (e.g. linear or differential attacks) are based on probabilistic characteristics, which makes their security grow exponentially with the number of rounds Nr. In this paper we study the security of such ciphers under an additional hypothesis: the S-box can be described by an overdefined system of algebraic equations (true with probability 1). We show that this is true for both Serpent (due to the small size of its S-boxes) and Rijndael (due to unexpected algebraic properties). We study general methods known for solving overdefined systems of equations, such as XL from Eurocrypt '00, and show their inefficiency. Then we introduce a new method called XSL that uses the sparsity of the equations and their specific structure. The XSL attack uses only relations true with probability 1, and thus the security does not have to grow exponentially in the number of rounds. XSL has a parameter P, and from our estimations it seems that P should be a constant or grow very slowly with the number of rounds. The XSL attack would then be polynomial (or subexponential) in Nr, with a huge constant that is double-exponential in the size of the S-box. The exact complexity of such attacks is not known due to the redundant equations. Though the presented version of the XSL attack always requires more work than exhaustive search for Rijndael, it seems to (marginally) break 256-bit Serpent. We suggest a new criterion for the design of S-boxes in block ciphers: they should not be describable by a system of polynomial equations that is too small or too overdefined.
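The linearization idea underlying XL (and, with refinements, XSL) can be shown on a toy system: treat each monomial as a fresh variable, then solve the resulting linear system over GF(2). A minimal sketch of linearization only, not the XSL attack itself:

```python
def solve_gf2(system, n):
    """Solve a consistent, full-column-rank linear system over GF(2).
    Each equation is (coeffs, rhs), coeffs a list of n bits, one per
    linearized monomial.  Gaussian elimination + back substitution."""
    eqs = [(c[:], r) for c, r in system]
    pivot_rows = []
    for col in range(n):
        for i, (c, r) in enumerate(eqs):
            if c[col]:
                eqs.pop(i)
                pivot_rows.append((col, c, r))
                # eliminate this column from all remaining equations
                eqs = [([a ^ b for a, b in zip(c2, c)], r2 ^ r) if c2[col]
                       else (c2, r2) for c2, r2 in eqs]
                break
    x = [0] * n
    for col, c, r in reversed(pivot_rows):
        x[col] = (r + sum(c[j] * x[j] for j in range(col + 1, n))) % 2
    return x

# Linearization: variables x, y; monomials xy, x, y become z0, z1, z2.
# Toy overdefined system (4 equations, 3 linearized unknowns):
#   x + y = 1        ->  z1 + z2 = 1
#   xy + y = 0       ->  z0 + z2 = 0
#   xy + x + y = 1   ->  z0 + z1 + z2 = 1
#   xy + x = 1       ->  z0 + z1 = 1
system = [([0, 1, 1], 1), ([1, 0, 1], 0), ([1, 1, 1], 1), ([1, 1, 0], 1)]
z = solve_gf2(system, 3)   # z = [xy, x, y]
```

With enough independent equations the linearized system pins down every monomial, here recovering x = 1, y = 0 (and xy = 0 consistently); XL generates the extra equations by multiplying the originals by monomials, and XSL does so selectively to exploit sparsity.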