99 results for average complexity
Abstract:
Project Management (PM) as an academic field is relatively new in Australian universities. Moreover, the field is distributed across four main areas: business (management), built environment and construction, engineering and, more recently, ICT (information systems). At an institutional level, with notable exceptions, there is little engagement between researchers working in those individual areas. Consequently, an initiative was launched in 2009 to create a network of PM researchers to build a disciplinary base for PM in Australia. The initiative took the form of a biennial forum. The first forum established the constituency and spread of PM research in Australia (Sense et al., 2011). This special issue of IJPM arose out of the second forum, held in 2012, which explored the notion of an Australian perspective on PM. At the forum, researchers were invited to collaborate to explore issues, methodological approaches, and theoretical positions underpinning their research and to answer the question: is there a distinctly Australian research agenda which responds to the current challenges of large and complex projects in our region? From a research point of view, it was abundantly clear at the forum that many of the issues facing Australian researchers are shared around the world. However, what emerged from the forum as the Australian perspective was a set of themes and research issues that dominate the Australian research agenda.
Abstract:
In this paper, a polynomial-time algorithm is presented for solving the Eden problem for graph cellular automata. The algorithm is based on our neighborhood elimination operation, which removes local neighborhood configurations that cannot appear in a pre-image of a given configuration. This paper presents a detailed derivation of our algorithm from first principles, together with a detailed complexity and accuracy analysis. The average-case time complexity of the algorithm is shown to be Θ(n²), with best and worst cases of Ω(n) and O(n³) respectively. This represents a vast improvement in the upper bound over current methods, without compromising average-case performance.
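To make the neighborhood elimination idea concrete, the sketch below applies it to an elementary cellular automaton on a ring rather than a general graph CA; the candidate-set representation, the one-directional propagation and the example rule are illustrative assumptions, not the authors' algorithm. Emptying a cell's candidate set certifies that no pre-image exists; surviving candidates alone do not certify that one does.

    from itertools import product

    def has_preimage(rule, target):
        # Constraint-propagation sketch of neighborhood elimination on a ring CA.
        # rule(left, centre, right) -> next state; target is the configuration
        # whose pre-image existence is being tested.
        n, states = len(target), (0, 1)
        # Initial elimination: keep only local pre-image neighbourhoods (l, c, r)
        # that actually map to the observed target value at each cell.
        cand = [{t for t in product(states, repeat=3) if rule(*t) == target[i]}
                for i in range(n)]
        changed = True
        while changed:
            changed = False
            for i in range(n):
                j = (i + 1) % n
                # Drop neighbourhoods of cell i that no neighbourhood of the
                # right-hand neighbour can extend (they must overlap on two symbols).
                keep = {t for t in cand[i] if any(t[1:] == u[:2] for u in cand[j])}
                if keep != cand[i]:
                    cand[i], changed = keep, True
                    if not keep:
                        return False   # a cell has no viable local pre-image: Eden configuration
        return True                    # no cell emptied; a pre-image may exist

    # Example: rule 90 (XOR of the two outer neighbours) on a small ring.
    print(has_preimage(lambda l, c, r: l ^ r, [0, 1, 1, 0, 1, 0]))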
Abstract:
With unpredictable workloads and a need for a multitude of specialized skills, many main contractors rely heavily on subcontracting to reduce their risks (Bresnen et al., 1985; Beardsworth et al., 1988). This is especially the case in Hong Kong, where the average direct labour content accounts for only around 1% of the total contract sum (Lai, 1987). Extensive use of subcontracting is also reported in many other countries, including the UK (Gray and Flanagan, 1989) and Japan (Bennett et al., 1987). In addition, and depending upon the scale and complexity of the works, it is not uncommon for subcontractors to further sublet their works to lower-tier subcontractors. Richter and Mitchell (1982) argued that main contractors can obtain a higher profit margin by reducing their performance costs through subcontracting work to those who have the necessary resources to perform the work more efficiently and economically. Subcontracting is also used strategically to allow firms to employ a minimum workforce under fluctuating demand (Usdiken and Sözen, 1985). Through subcontracting, the risks of main contractors are also reduced, as errors in estimating or additional costs caused by delays or extra labour requirements can be absorbed by the subcontractors involved (Woon and Ofori, 2000). Despite these benefits, the quality of work can suffer when incapable or inexperienced subcontractors are employed. Additional problems also exist in the form of bid shopping, unclear accountability, and high fragmentation (Palaneeswaran et al., 2002). A recent report produced by the Hong Kong Construction Industry Review Committee (CIRC) points to the development of a framework to help distinguish between capable and incapable subcontractors (Tang, 2001). This paper describes research aimed at identifying and prioritising criteria for use in such a framework.
Abstract:
Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from the lens center. The average index along the lens axis was estimated by integration. The equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then by backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, the average axial refractive index increased (1.408 to 1.411) and the equivalent index decreased (1.425 to 1.420) as age increased from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of the index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurement are required.
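As an illustration of the axial-index integration, the sketch below assumes a power-law profile n(rho) = n_centre - (n_centre - n_edge) * rho^p, with the centre and edge values quoted above; the functional form and the example exponents are assumptions for illustration, not the authors' fitted model.

    import numpy as np

    def average_axial_index(n_centre=1.415, n_edge=1.37, p=5.0):
        # Numerically integrate the assumed profile over the normalised axial
        # distance 0..1; analytically this equals n_centre - (n_centre - n_edge)/(p + 1).
        rho = np.linspace(0.0, 1.0, 10001)
        return np.trapz(n_centre - (n_centre - n_edge) * rho**p, rho)

    # A flatter gradient (larger p) raises the average index, in line with the
    # reported increase from about 1.408 to 1.411 across the age range.
    for p in (5.0, 10.0):
        print(p, round(average_axial_index(p=p), 4))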
Abstract:
Using a quasi-natural voting experiment encompassing a 160-year period (1848–2009) in Switzerland, we investigate whether a higher level of complexity leads to increased reliance on trusted parliamentary representatives. We find that when more referenda are held on the same day, constituents are more likely to refer to parliamentary recommendations when making their decisions. This finding holds true even when we narrow our focus to referenda with a relatively lower voter turnout on days on which more than one referendum is held. We also demonstrate that when constituents face a higher level of complexity, they follow the parliamentary recommendations rather than those of interest groups. "Viewed as a geometric figure, the ant’s path is irregular, complex, hard to describe. But its complexity is really a complexity in the surface of the beach, not a complexity in the ant." ([1] p. 51)
Abstract:
Objectives The intent of this paper is to examine health IT implementation processes – the barriers to and facilitators of successful implementation – to identify a beginning set of implementation best practices, to identify gaps in the health IT implementation body of knowledge, and to make recommendations for future study and application. Methods A literature review resulted in the identification of six health IT-related implementation best practices, which were subsequently debated and clarified by participants attending the NI2012 Research Post Conference held in Montreal in the summer of 2012. Using the Consolidated Framework for Implementation Research (CFIR) to guide their application, the six best practices were applied to two distinct health IT implementation studies to assess their applicability. Results Assessing the implementation processes from two markedly diverse settings illustrated both the challenges and the potential of using standardized implementation processes. In support of what was discovered in the review of the literature, “one size fits all” in health IT implementation is a fallacy, particularly when global diversity is added into the mix. At the same time, several frameworks show promise for use as “scaffolding” to begin to assess best practices, their distinct dimensions, and their applicability for use. Conclusions Health IT innovations, regardless of the implementation setting, require a close assessment of many dimensions. While there is no “one size fits all”, there are commonalities and best practices that can be blended, adapted, and utilized to improve the process of implementation. This paper examines health IT implementation processes and identifies a beginning set of implementation best practices, which could begin to address gaps in the health IT implementation body of knowledge.
Abstract:
Organizational transformations reliant on successful ICT system developments (continue to) fail to deliver projected benefits even when contemporary governance models are applied rigorously. Modifications to traditional program, project and systems development management methods have produced little material improvement in successful transformation, as they are unable to routinely address the complexity and uncertainty of dynamic alignment of IS investments and innovation. Complexity theory provides insight into why this phenomenon occurs and is used to develop a conceptualization of complexity in IS-driven organizational transformations. This research-in-progress aims to identify complexity formulations relevant to organizational transformation. Political/power-based influences, interrelated business rules, socio-technical innovation, impacts on stakeholders and emergent behaviors are commonly considered as characterizing complexity, while the proposed conceptualization accommodates these as connectivity, irreducibility, entropy and/or information gain in hierarchical approximation and scaling, the number of states in a finite automaton and/or the dimension of an attractor, and information and/or variety.
Abstract:
Cryptosystems based on the hardness of lattice problems have recently acquired much importance due to their average-case to worst-case equivalence, their conjectured resistance to quantum cryptanalysis, their ease of implementation and increasing practicality, and, lately, their promising potential as a platform for constructing advanced functionalities. In this work, we construct “Fuzzy” Identity Based Encryption from the hardness of the Learning With Errors (LWE) problem. We note that for our parameters, the underlying lattice problems (such as gapSVP or SIVP) are assumed to be hard to approximate within subexponential factors for adversaries running in subexponential time. We give CPA- and CCA-secure variants of our construction, for small and large universes of attributes. All our constructions are secure against selective-identity attacks in the standard model. Our construction is made possible by observing certain special properties that secret sharing schemes need to satisfy in order to be useful for Fuzzy IBE. We also discuss some obstacles towards realizing lattice-based attribute-based encryption (ABE).
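For readers unfamiliar with the underlying assumption, the toy sketch below generates decision-LWE samples (A, b = A·s + e mod q); the parameters are illustrative only and far smaller than anything the actual scheme would require.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, q, noise_sd = 16, 32, 7681, 2.0        # toy parameters, not secure

    s = rng.integers(0, q, size=n)               # secret vector in Z_q^n
    A = rng.integers(0, q, size=(m, n))          # uniformly random public matrix
    e = np.rint(rng.normal(0.0, noise_sd, size=m)).astype(int)   # small error terms
    b = (A @ s + e) % q                          # LWE samples

    # Decision-LWE asserts that (A, b) is computationally indistinguishable from
    # (A, u) with u uniform over Z_q^m; the scheme's security reduces to this.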
Abstract:
This paper contributes to conversations about the funding and quality of education research. The paper proceeds in two parts. Part I sets the context by presenting an historical analysis of funding allocations made to Education research through the ARC’s Discovery projects scheme between the years 2002 and 2014, and compares these trends to allocations made to another field within the Social, Behavioural and Economic Sciences assessment panel: Psychology and Cognitive Science. Part II highlights the consequences of underfunding education research by presenting evidence from an Australian Research Council Discovery project that is tracking the experiences of disaffected students who are referred to behaviour schools. The re-scoping decisions that became necessary and the incidental costs that accrue from complications that occur in the field are illustrated and discussed through vignettes of research with “ghosts” who don’t like school but who do like lollies, chess and Lego.
Abstract:
Business processes are an important instrument for understanding and improving how companies provide goods and services to customers. Many companies have therefore documented their business processes well, often in the form of Event-driven Process Chains (EPCs). Unfortunately, in many cases the resulting EPCs are rather complex, so that the overall process logic is hidden in low-level process details. This paper proposes abstraction mechanisms for process models that aim to reduce their complexity while keeping the overall process structure. We assume that functions are marked with efforts and splits are marked with probabilities. This information is used to separate important process parts from less important ones. Real-world process models are used to validate the approach.
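One plausible way such annotations feed the abstraction is to aggregate a fragment's effort before it is collapsed into a single function, for example by summing efforts along a sequence and probability-weighting them across an XOR block; the sketch below uses these aggregation rules as an illustrative assumption rather than the paper's exact formulas.

    def sequence_effort(efforts):
        # Abstracting a sequence: the aggregated function costs the sum of its steps.
        return sum(efforts)

    def xor_block_effort(branches):
        # Abstracting an XOR split/join block: expected effort is the
        # probability-weighted sum over the branches (probabilities sum to 1).
        assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
        return sum(p * effort for p, effort in branches)

    # Example: 70% of cases take a 5-unit branch and 30% a 20-unit branch,
    # followed in sequence by a 3-unit function; the abstracted effort is 12.5.
    print(sequence_effort([xor_block_effort([(0.7, 5.0), (0.3, 20.0)]), 3.0]))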
A low-complexity flight controller for Unmanned Aircraft Systems with constrained control allocation
Abstract:
In this paper, we propose a framework for joint allocation and constrained control design of flight controllers for Unmanned Aircraft Systems (UAS). The actuator configuration is used to map the actuator constraint set into the space of the aircraft's generalised forces. By constraining the demanded generalised forces, we ensure that the allocation problem is always feasible and can therefore be solved without constraints. This leads to an allocation problem that does not require on-line numerical optimisation. Furthermore, since the controller handles the constraints, there is no need to implement heuristics to inform the controller about actuator saturation. The latter is fundamental for avoiding Pilot Induced Oscillations (PIO) in remotely operated UAS due to the rate limits on the aircraft control surfaces.
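A minimal numerical sketch of this idea is given below, using an assumed effectiveness matrix and deflection limits; scaling the demanded generalised forces back whenever the pseudo-inverse allocation would exceed an actuator limit is used here as a conservative stand-in for restricting the demand to the image of the actuator constraint set, since the paper's exact construction is not spelled out in the abstract.

    import numpy as np

    # Assumed effectiveness matrix mapping 4 actuator deflections to 3 generalised
    # moments, tau = B @ u, with symmetric deflection limits (values are illustrative).
    B = np.array([[ 1.0, -1.0,  1.0, -1.0],
                  [ 1.0,  1.0, -1.0, -1.0],
                  [ 0.5,  0.5,  0.5,  0.5]])
    u_max = np.array([0.35, 0.35, 0.35, 0.35])
    B_pinv = np.linalg.pinv(B)

    def allocate(tau_demand):
        # Unconstrained pseudo-inverse allocation, made feasible by uniformly
        # scaling the demanded generalised forces back when a limit would be hit,
        # so no on-line constrained optimisation is needed.
        u = B_pinv @ tau_demand
        scale = max(1.0, float(np.max(np.abs(u) / u_max)))
        return u / scale, 1.0 / scale

    u, fraction_met = allocate(np.array([0.6, -0.2, 0.1]))
    print(np.round(u, 3), round(fraction_met, 3))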
Abstract:
The planning of IMRT treatments requires a compromise between dose conformity (complexity) and deliverability. This study investigates established and novel treatment complexity metrics for 122 IMRT beams from prostate treatment plans. The Treatment And Dose Assessor (TADA) software was used to extract the necessary data from exported treatment plan files and calculate the metrics. For most of the metrics, there was strong overlap between the values calculated for plans that passed and plans that failed their quality assurance (QA) tests. However, statistically significant variation between plans that passed and failed QA measurements was found for the established modulation index and for a novel metric describing the proportion of small apertures in each beam. The ‘small aperture score’ provided threshold values which successfully distinguished deliverable treatment plans from plans that did not pass QA, with a low false negative rate.
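The abstract does not give the exact definition of the ‘small aperture score’; the sketch below shows one way such a metric could be computed from MLC leaf-pair gaps, with the 10 mm threshold and the data layout as assumptions (the published TADA definition may differ).

    import numpy as np

    def small_aperture_score(leaf_gaps_per_cp, threshold_mm=10.0):
        # Fraction of open MLC leaf-pair gaps smaller than the threshold, pooled
        # over all control points of a beam; fully closed pairs are ignored.
        gaps = np.concatenate([np.asarray(g, dtype=float) for g in leaf_gaps_per_cp])
        open_gaps = gaps[gaps > 0.0]
        return float(np.mean(open_gaps < threshold_mm)) if open_gaps.size else 0.0

    # Example: two control points of a beam, leaf-pair gaps in mm.
    beam = [[0.0, 4.0, 12.0, 25.0], [2.5, 8.0, 30.0, 0.0]]
    print(small_aperture_score(beam))   # 0.5: half of the open gaps are below 10 mm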
Abstract:
Introduction Given the known challenges of obtaining accurate measurements of small radiation fields, and the increasing use of small field segments in IMRT beams, this study examined the possible effects of referencing inaccurate field output factors in the planning of IMRT treatments. Methods This study used the Brainlab iPlan treatment planning system to devise IMRT treatment plans for delivery using the Brainlab m3 microMLC (Brainlab, Feldkirchen, Germany). Four pairs of sample IMRT treatments were planned using volumes, beams and prescriptions based on the set of test plans described in AAPM TG 119’s recommendations for the commissioning of IMRT treatment planning systems [1]:
• C1, a set of three 4 cm volumes with different prescription doses, was modified to reduce the size of the PTV to 2 cm across and to include an OAR dose constraint for one of the other volumes.
• C2, a prostate treatment, was planned as described in the TG 119 report [1].
• C3, a head-and-neck treatment with a PTV larger than 10 cm across, was excluded from the study.
• C4, an 8 cm long C-shaped PTV surrounding a cylindrical OAR, was planned as described in the TG 119 report [1] and then replanned with the length of the PTV reduced to 4 cm.
Both plans in each pair used the same beam angles, collimator angles, dose reference points, prescriptions and constraints. However, one plan in each pair had its beam modulation optimisation and dose calculation completed with reference to existing iPlan beam data, and the other with reference to revised beam data. The beam data revisions consisted of increasing the field output factor for a 0.6 × 0.6 cm² field by 17% and increasing the field output factor for a 1.2 × 1.2 cm² field by 3%. Results The use of different beam data resulted in different optimisation results, with different microMLC apertures and segment weightings between the two plans for each treatment, which led to large differences (up to 30%, with an average of 5%) between reference point doses in each pair of plans. These point dose differences are more indicative of the modulation of the plans than of any clinically relevant changes to the overall PTV or OAR doses. By contrast, the differences in the maximum, minimum and mean doses to the PTVs and OARs were smaller (less than 1% for all beams in three of the four pairs of treatment plans) but are more clinically important. Of the four test cases, only the shortened (4 cm) version of TG 119’s C4 plan showed substantial differences between the overall doses calculated in the volumes of interest using the different sets of beam data, thereby suggesting that treatment doses could be affected by changes to small field output factors. An analysis of the complexity of this pair of plans, using Crowe et al.’s TADA code [2], indicated that iPlan’s optimiser had produced IMRT segments composed of larger numbers of small microMLC leaf separations than in the other three test cases. Conclusion: The use of altered small field output factors can result in substantially altered doses when large numbers of small leaf apertures are used to modulate the beams, even when treating relatively large volumes.
Abstract:
Mammographic density (MD) adjusted for age and body mass index (BMI) is a strong heritable breast cancer risk factor; however, its biological basis remains elusive. Previous studies assessed MD-associated histology using random sampling approaches, despite evidence that high and low MD areas exist within a breast and are negatively correlated with one another. We have used an image-guided approach to sample high and low MD tissues from within individual breasts to examine the relationship between histology and degree of MD. Image-guided sampling was performed using two different methodologies on mastectomy tissues (n = 12): (1) sampling of high and low MD regions within a slice, guided by bright (high MD) and dark (low MD) areas on a slice X-ray film; (2) sampling of high and low MD regions within a whole breast using a stereotactically guided vacuum-assisted core biopsy technique. Pairwise analysis accounting for potential confounders (e.g. age, BMI and menopausal status) provides appropriate power for analysis despite the small sample size. High MD tissues had higher stromal (P = 0.002) and lower fat (P = 0.002) compositions, but showed no evidence of a difference in glandular area (P = 0.084), compared to low MD tissues from the same breast. High MD regions had higher relative gland counts (P = 0.023), and a preponderance of Type I lobules in high MD compared to low MD regions was observed in 58% of subjects (n = 7), although this did not achieve significance. These findings clarify the histologic nature of high MD tissue and support hypotheses regarding the biophysical impact of dense connective tissue on mammary malignancy. They also provide important terms of reference for ongoing analyses of the underlying genetics of MD.