895 results for Generalized extreme value distribution


Relevance:

20.00%

Publisher:

Abstract:

This series of research vignettes is aimed at sharing current and interesting research findings from our team and other international researchers. In this vignette, Dr Martie-Louise Verreynne from the University of Queensland Business School summarises the findings of a paper written with Sarel Gronum and Tim Kastelle, also of the UQ Business School, that examined whether networking really contributes to small firms' bottom line. Their findings show that unless networks are used for productive means, efforts to cultivate and maintain them may be wasteful.

Relevance:

20.00%

Publisher:

Abstract:

Organizations today make radical use of IT resources to sustain or improve their existing competitive position. One such initiative is forming alliances on a shared IT backbone with partners in their value chain. We term these alliances collaborative organizational structures (COS). Regardless of the nature of their engagement with IT resources, organizations require unique competencies to obtain performance-differentiating value from these resources. In a collaborative environment, these competencies result from the synergy between the alliance partners' unique competencies. We call these the inter-firm IT-related capabilities. Resource-centric theoretical frameworks suggest a trajectory of competence development and the structure of inter-firm competencies, but they do not describe the nature of these competencies. We employ an interpretive design to suggest three inter-firm IT-related capabilities for IT-backed collaborative alliances. We discuss these capabilities in this research and suggest that their effectiveness be measured directly against the collaborative rent, and indirectly against the firm-level performance of the alliance partners. This forms a model for leveraging and evaluating value within IT-backed collaborative alliances.

Relevance:

20.00%

Publisher:

Abstract:

Background: The four principles of Beauchamp and Childress (autonomy, non-maleficence, beneficence and justice) have been extremely influential in the field of medical ethics, and are fundamental to understanding the current approach to ethical assessment in health care. This study tests whether these principles can be quantitatively measured on an individual level, and subsequently whether they are used in the decision-making process when individuals face ethical dilemmas.

Methods: The Analytic Hierarchy Process was used as a tool for measuring the principles. Four scenarios involving conflicts between the medical ethical principles were presented to participants, who made judgements about the ethicality of the action in each scenario and their intention to act in the same manner in that situation.

Results: Individual preferences for these medical ethical principles can be measured using the Analytic Hierarchy Process, which provides a useful tool with which to highlight individual medical ethical values. On average, individuals show a significant preference for non-maleficence over the other principles; however, and perhaps counter-intuitively, this preference does not seem to carry over to applied ethical judgements in specific dilemmas.

Conclusions: People state that they value these medical ethical principles, but they do not appear to use them directly in the decision-making process. This is explained by the lack of a behavioural model accounting for the relevant situational factors not captured by the principles. The limitations of the principles in predicting ethical decision-making are discussed.
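As an illustration of the measurement step, the Analytic Hierarchy Process derives priority weights from a pairwise comparison matrix. A minimal sketch, with a hypothetical comparison matrix over the four principles (the numbers below are invented, not the study's data) and the row geometric-mean approximation to the principal eigenvector:

```python
import math

# Hypothetical pairwise comparison matrix over the four principles on
# Saaty's 1-9 scale; A[i][j] = how strongly principle i is preferred
# over principle j. These values are illustrative only.
principles = ["autonomy", "non-maleficence", "beneficence", "justice"]
A = [
    [1,   1/3, 1,   2],
    [3,   1,   3,   4],
    [1,   1/3, 1,   2],
    [1/2, 1/4, 1/2, 1],
]

def ahp_weights(matrix):
    """Approximate AHP priority weights by the row geometric-mean method."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

weights = dict(zip(principles, ahp_weights(A)))
for name, w in weights.items():
    print(f"{name}: {w:.3f}")
```

With this illustrative matrix, non-maleficence receives the largest weight, mirroring the average preference the study reports.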

Relevance:

20.00%

Publisher:

Abstract:

The opening phrase of the title is from Charles Darwin's notebooks (Schweber 1977). It is a double reminder: firstly, that mainstream evolutionary theory is not just about describing nature but is particularly looking for mechanisms or 'causes'; and secondly, that there will usually be several causes affecting any particular outcome. The second part of the title reflects our concern at the almost universal rejection of the idea that biological mechanisms are sufficient for macroevolutionary changes, a rejection of a cornerstone of Darwinian evolutionary theory. Our primary aim here is to consider ways of making it easier to develop and to test hypotheses about evolution; formalizing hypotheses can help generate tests. In an absolute sense, some of the discussion by scientists about evolution is little better than the lack of reasoning used by those advocating intelligent design. Our discussion is set in a Popperian framework, where science is defined as that area of study in which it is possible, in principle, to find evidence against hypotheses – they are in principle falsifiable. With time, however, the boundaries of science keep expanding. In the past, some aspects of evolution lay outside the then-current boundaries of falsifiable science, but new techniques and ideas are expanding those boundaries and it is appropriate to re-examine some topics. Over the last few decades there appears to have been an increasingly strong assumption to look first (and only) for a physical cause. This decision is virtually never formally discussed; it is simply assumed that some physical factor 'drives' evolution. We need to examine our assumptions much more carefully: what is meant by physical factors 'driving' evolution, or by an 'explosive radiation'? Our discussion focuses on two of the six mass extinctions: the fifth, the events of the Late Cretaceous, and the sixth, which began at least 50,000 years ago and is ongoing.

Cretaceous/Tertiary boundary: the rise of birds and mammals. We have had a long-term interest (Cooper and Penny 1997) in designing tests to help evaluate whether the processes of microevolution are sufficient to explain macroevolution. The real challenge is to formulate hypotheses in a testable way. For example, the number of lineages of birds and mammals that survived from the Cretaceous to the present is one test. Our first estimate was 22 for birds, and current work is tending to increase this value. This still does not consider lineages that survived into the Tertiary and then went extinct later. Our initial suggestion was probably too narrow in that it lumped four models from Penny and Phillips (2004) into one. This reduction is too simplistic in that we need to know about survival and about ecological and morphological divergence during the Late Cretaceous, and whether crown groups of avian or mammalian orders may have existed back into the Cretaceous. More recently (Penny and Phillips 2004) we have formalized hypotheses about dinosaurs and pterosaurs, with the prediction that interactions between mammals (and ground-feeding birds) and dinosaurs would be most likely to affect the smallest dinosaurs, and similarly that interactions between birds and pterosaurs would particularly affect the smaller pterosaurs. There is now evidence for both classes of interactions, with the smallest dinosaurs and pterosaurs declining first, as predicted. Thus, testable models are now possible.

Mass extinction number six: human impacts. On a broad scale, there is a good correlation between the time of human arrival and increased extinctions (Hurles et al. 2003; Martin 2005; Figure 1). However, it is necessary to distinguish different time scales (Penny 2005), and on a finer scale there are still large numbers of possibilities. In Hurles et al. (2003) we mentioned habitat modification (including the use of fire) and introduced plants and animals (including kiore), in addition to direct predation (the 'overkill' hypothesis). We also need to consider the prey switching that occurs in early human societies, as evidenced by the results of Wragg (1995) on middens of different ages on Henderson Island in the Pitcairn group. In addition, the presence of human-wary or human-adapted animals will affect the distribution in the subfossil record. A better understanding of human impacts world-wide, in conjunction with pre-scientific knowledge, will make it easier to discuss the issues by removing 'blame'. While continued spontaneous generation was accepted universally, there was the expectation that animals continued to reappear. New Zealand is one of the very best locations in the world to study many of these issues: apart from the marine fossil record, some human impact events are extremely recent and the remains are less disrupted by time. (Geogenes III, July 2006)

Relevance:

20.00%

Publisher:

Abstract:

There is no doubt that information technology (IT) resources are important for organisations in any jurisdiction to manage their processes. Organisations consume considerable financial resources to acquire and manage their IT resources under various IT governance structures; investment in IT is thus a strategic necessity. IT resources, however, do not contribute fully to business value on their own. Business value considers the performance impacts of resources at various organisational levels (e.g., the process and firm levels), and IT resources require some form of manipulation to attain their maximum value. While we know that IT resources are important, a deeper understanding of two aspects of their use in organisations is needed: first, how to leverage IT resources to attain their maximum value; and second, where to evaluate IT-related business value in the organisation's value chain. This understanding is important for organisations seeking to sustain their operations in an ever-changing business environment. We address these issues in two parts. This paper discusses the first aspect: the ways in which organisations can create and sustain their IT-related business value.

Relevance:

20.00%

Publisher:

Abstract:

A deeper understanding of two aspects of the use of IT resources in organisations is important to ensure sustainable investment in these resources. The first is how to leverage IT resources to attain their maximum value; we discussed this aspect in part 1 of this series, suggesting a complementary approach as a first stage of IT business value creation, and a dynamic capabilities approach to secure sustainable IT-related business value from IT resources. The second important aspect is where to evaluate IT-related business value in the organisation's value chain. This understanding is important for organisations to ensure appropriate accountability for the investment in and management of IT resources. We address this issue in this second part of the two-part series.

Relevance:

20.00%

Publisher:

Abstract:

Voltage drop and rise at network peak and off-peak periods, along with voltage unbalance, are the major power quality problems in low voltage distribution networks. Utilities usually try to address voltage drop by adjusting transformer tap changers, and voltage unbalance by distributing loads equally across the phases. At the same time, ever-increasing energy demand, together with the need for cost reduction and higher reliability, is driving modern power systems towards Distributed Generation (DG) units, whether in the form of small rooftop photovoltaic cells (PV), Plug-in Electric Vehicles (PEVs) or Microgrids (MGs). Rooftop PVs, typically with power levels ranging from 1-5 kW and installed by householders, are gaining popularity due to their financial benefits for the householders. PEVs will also soon emerge in residential distribution networks; they behave as a large residential load while being charged, and later generations are expected to support the network as small DG units that transfer the energy stored in their batteries into the grid. Furthermore, the MG, a cluster of loads and several DG units such as diesel generators, PVs, fuel cells and batteries, has recently been introduced to distribution networks. Voltage unbalance in the network can increase due to uncertainties in the connection points of PVs and PEVs, their nominal capacity and their time of operation. It is therefore of high interest to investigate voltage unbalance in these networks as a result of the integration of MGs, PVs and PEVs into low voltage networks. In addition, the network might experience non-standard voltage drop due to high penetration of PEVs charging at night, or non-standard voltage rise due to high penetration of PVs and PEVs generating electricity back into the grid during off-peak periods.
In this thesis, a voltage unbalance sensitivity analysis and stochastic evaluation are carried out for householder-installed PVs, treating their installation point, nominal capacity and penetration level as the uncertainties. A similar analysis is carried out for PEV penetration in the network in two different modes: grid-to-vehicle and vehicle-to-grid. Conventional methods for improving voltage unbalance within these networks are then discussed, followed by new and efficient methods for voltage profile improvement at network peak and off-peak periods and for voltage unbalance reduction. In addition, voltage unbalance reduction is investigated for MGs, and new improvement methods are proposed and applied to the MG test bed planned to be established at Queensland University of Technology (QUT). MATLAB and PSCAD/EMTDC simulation software are used to verify the analyses and proposals.
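A common measure behind such an analysis is the voltage unbalance factor (VUF): the ratio of the negative- to positive-sequence voltage magnitudes obtained from the Fortescue transform. A minimal sketch with hypothetical phase voltages (the thesis's own definitions, networks and figures may differ):

```python
import cmath

def vuf_percent(va, vb, vc):
    """Voltage unbalance factor (%) = |V_negative| / |V_positive| * 100,
    from the symmetrical-component (Fortescue) transform of the phasors."""
    a = cmath.exp(2j * cmath.pi / 3)          # 120-degree rotation operator
    v_pos = (va + a * vb + a * a * vc) / 3    # positive-sequence component
    v_neg = (va + a * a * vb + a * vc) / 3    # negative-sequence component
    return 100 * abs(v_neg) / abs(v_pos)

# Balanced 230 V three-phase set: VUF should be ~0 %.
va = cmath.rect(230, 0)
vb = cmath.rect(230, -2 * cmath.pi / 3)
vc = cmath.rect(230, 2 * cmath.pi / 3)
print(f"balanced:   {vuf_percent(va, vb, vc):.3f}%")

# Hypothetical rooftop PV raising phase a to 245 V: VUF becomes nonzero.
print(f"unbalanced: {vuf_percent(cmath.rect(245, 0), vb, vc):.3f}%")
```

Raising a single phase voltage, as a single-phase PV connection can, immediately produces a nonzero VUF, which is why the random connection point matters.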

Relevance:

20.00%

Publisher:

Abstract:

Utilizing a mono-specific antiserum produced in rabbits to hog kidney aromatic L-amino acid decarboxylase (AADC), the enzyme was localized in rat kidney by immunoperoxidase staining. AADC was located predominantly in the proximal convoluted tubules; there was also weak staining in the distal convoluted tubules and collecting ducts. An increase in dietary potassium or sodium intake produced no change in density or distribution of AADC staining in kidney. An assay of AADC enzyme activity showed no difference in cortex or medulla with chronic potassium loading. A change in distribution or activity of renal AADC does not explain the postulated dopaminergic modulation of renal function that occurs with potassium or sodium loading.

Relevance:

20.00%

Publisher:

Abstract:

A total histological grade does not necessarily distinguish between different manifestations of cartilage damage or degeneration. An accurate and reliable histological assessment method is required to separate normal and pathological tissue within a joint during treatment of degenerative joint conditions, and to sub-classify the latter in meaningful ways. The Modified Mankin method may be adaptable for this purpose. We investigated how much detail may be lost by assigning one composite score/grade to represent the different degenerative components of the osteoarthritic condition. We used four ovine injury models (sham surgery, anterior cruciate ligament/medial collateral ligament instability, simulated anatomic anterior cruciate ligament reconstruction, and meniscal removal) to induce different degrees, and potentially different 'types' (mechanisms), of osteoarthritis. Articular cartilage was systematically harvested, prepared for histological examination and graded in a blinded fashion using a Modified Mankin grading method. The results showed that the possible permutations of cartilage damage were far more varied than current histological grading systems are intended to capture: of 1352 cartilage specimens graded, 234 different manifestations of potential histological damage were observed across 23 potential individual grades of the Modified Mankin grading method. These results show that current composite histological grading may mask additional information that could potentially discern different stages or mechanisms of cartilage damage and degeneration in a sheep model. This approach may be applicable to other grading systems.
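The information loss from compositing can be illustrated numerically. A minimal sketch, using hypothetical component ranges loosely modelled on the original Mankin scheme (not the Modified Mankin method or the data used in the paper), counts how many distinct component profiles collapse onto the composite totals:

```python
from itertools import product

# Hypothetical per-component score ranges, loosely based on the original
# Mankin scheme (structure 0-6, cellularity 0-3, matrix staining 0-4,
# tidemark integrity 0-1); illustrative only.
ranges = {
    "structure": range(7),
    "cellularity": range(4),
    "staining": range(5),
    "tidemark": range(2),
}

profiles = list(product(*ranges.values()))   # every distinct damage profile
totals = {sum(p) for p in profiles}          # composite grades they map to

print(f"{len(profiles)} distinct profiles collapse to "
      f"{len(totals)} composite grades")
```

Here 280 distinct component profiles map onto only 15 composite totals, the same many-to-few collapse the paper observes (234 manifestations across 23 grades).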

Relevance:

20.00%

Publisher:

Abstract:

Generalized fractional partial differential equations have now found wide application in describing important physical phenomena, such as subdiffusive and superdiffusive processes. However, studies of generalized multi-term time and space fractional partial differential equations are still under development. In this paper, the multi-term time-space Caputo-Riesz fractional advection diffusion equations (MT-TSCR-FADE) with nonhomogeneous Dirichlet boundary conditions are considered. The multi-term time-fractional derivatives are defined in the Caputo sense, with orders belonging to the intervals [0, 1], [1, 2] and [0, 2]; these cases are called the multi-term time-fractional diffusion terms, wave terms and mixed diffusion-wave terms, respectively. The space fractional derivatives are defined as Riesz fractional derivatives. Analytical solutions of the three types of the MT-TSCR-FADE are derived with Dirichlet boundary conditions. Using Luchko's theorem (Acta Math. Vietnam., 1999), we propose some new techniques, such as a spectral representation of the fractional Laplacian operator and the equivalence between the fractional Laplacian operator and the Riesz fractional derivative, that enable the derivation of analytical solutions for the multi-term time-space Caputo-Riesz fractional advection-diffusion equations.
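The spectral representation named above can be sketched in its standard form (a reconstruction from the literature, not necessarily the paper's exact notation). On the interval $(0, L)$ with homogeneous Dirichlet boundary conditions and $1 < \alpha \le 2$, the fractional Laplacian acts diagonally on the sine eigenbasis:

```latex
% Spectral representation of the fractional Laplacian on (0, L),
% homogeneous Dirichlet boundary conditions, 1 < \alpha \le 2:
(-\Delta)^{\alpha/2} u(x)
  = \sum_{n=1}^{\infty} \left(\frac{n\pi}{L}\right)^{\alpha} \hat{u}_n
    \sin\!\left(\frac{n\pi x}{L}\right),
\qquad
\hat{u}_n = \frac{2}{L}\int_{0}^{L} u(x)\,\sin\!\left(\frac{n\pi x}{L}\right)\mathrm{d}x .

% Equivalence with the Riesz fractional derivative for such u:
\frac{\partial^{\alpha} u}{\partial |x|^{\alpha}}(x) = -(-\Delta)^{\alpha/2} u(x).
```

This diagonalization is what lets separation of variables, combined with Luchko's theorem for the multi-term time-fractional part, yield series-form analytical solutions.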

Relevance:

20.00%

Publisher:

Abstract:

The potential of multiple distribution static synchronous compensators (DSTATCOMs) to improve the voltage profile of radial distribution networks has been reported in the literature by a few authors. However, the operation of multiple DSTATCOMs across a distribution feeder may introduce control interactions and/or voltage instability. This study proposes a control scheme that alleviates interactions among controllers and ensures proper reactive power sharing among DSTATCOMs. A generalised mathematical model is presented to analyse the interactions among any number of DSTATCOMs in the network, and the criterion for controller design is developed by conducting eigenvalue analysis on this model. The proposed control scheme is tested in the time domain on a sample radial distribution feeder installed with multiple DSTATCOMs, and test results are presented.
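The eigenvalue-based design criterion can be illustrated with a toy example. A minimal sketch, under our own assumption of a 2x2 small-signal state matrix for two coupled voltage controllers (the paper derives its model from the actual feeder; the numbers here are invented), checks whether all eigenvalues lie in the left half-plane:

```python
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 matrix [[a, b], [c, d]] via the
    characteristic polynomial lambda^2 - tr*lambda + det = 0."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# Off-diagonal entries represent coupling between the two controllers.
weak_coupling = eig2(-5.0, 0.5, 0.5, -4.0)     # lightly interacting
strong_coupling = eig2(-5.0, 6.0, 6.0, -4.0)   # strongly interacting

stable = all(l.real < 0 for l in weak_coupling)
unstable = any(l.real >= 0 for l in strong_coupling)
print("weakly coupled controllers stable:  ", stable)
print("strongly coupled controllers stable:", not unstable)
```

With weak coupling both eigenvalues stay negative; stronger coupling pushes one into the right half-plane, which is the kind of interaction-driven instability the controller-design criterion guards against.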

Relevance:

20.00%

Publisher:

Abstract:

Key establishment is a crucial primitive for building secure channels in a multi-party setting. Without quantum mechanics, key establishment can only be done under the assumption that some computational problem is hard. Since digital communication can be easily eavesdropped upon and recorded, it is important to consider the secrecy of information in anticipation of future algorithmic and computational discoveries that could break the secrecy of past keys, violating the secrecy of the confidential channel. Quantum key distribution (QKD) can be used to generate secret keys that are secure against any future algorithmic or computational improvements. QKD protocols still require authentication of classical communication, although existing security proofs of QKD typically assume idealized authentication. It is generally considered folklore that QKD used with computationally secure authentication is still secure against an unbounded adversary, provided the adversary did not break the authentication during the run of the protocol. We describe a security model for quantum key distribution extending classical authenticated key exchange (AKE) security models. Using our model, we characterize the long-term security of the BB84 QKD protocol with computationally secure authentication against an eventually unbounded adversary. By basing our model on traditional AKE models, we can more readily compare the relative merits of various forms of QKD and existing classical AKE protocols. This comparison illustrates in which types of adversarial environments different quantum and classical key agreement protocols can be secure.
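For readers unfamiliar with BB84, the sifting step whose classical discussion must be authenticated can be sketched as follows. This is a minimal illustration with an ideal channel and no eavesdropper, not the paper's security model:

```python
import secrets

# BB84 sifting sketch: Alice sends random bits in random bases; Bob
# measures in random bases; afterwards they publicly compare bases
# (this is the classical exchange that needs authentication) and keep
# only the positions where the bases happened to match.
n = 64
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(n)]

# With matching bases Bob recovers Alice's bit; with mismatched bases
# his outcome would be random, so those positions are discarded.
sifted = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]

print(f"sent {n} qubits, sifted key length {len(sifted)} (about n/2 expected)")
```

An adversary who can forge the basis-comparison messages can mount a man-in-the-middle attack, which is why authentication, even computationally secure authentication, is essential during the run of the protocol.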

Relevance:

20.00%

Publisher:

Abstract:

This paper argues that the logic of neoliberal choice policy is typically blind to considerations of space and place, but inevitably impacts on rural and remote locations through the way middle-class professionals view the opportunities available in their local educational markets. The paper considers the value of middle-class professionals' educational capitals in regional communities and their problematic distribution, given that class fraction's particular investment in choice strategies to secure their children's futures. It then profiles the educational market in six communities along a transect between a major regional centre and a remote 'outback' town, using publicly available data from the Australian government's 'My School' website. Comparison of the local markets shows how educational outcomes are distributed across them and how the dimensions of 'choice' thin out along the transect. Interview data offers insights into how professional families in these localities engage selectively with their local educational markets, or plan to transcend them. The discussion reflects on the growing importance of educational choice as a marker of place in the competition between localities to attract and retain the professionals who staff vital human services in their communities.