887 results for nonparametric demand model


Relevance:

20.00%

Publisher:

Abstract:

Scientists need to transfer semantically similar queries across multiple heterogeneous linked datasets. These queries may require data from different locations and the results are not simple to combine due to differences between datasets. A query model was developed to make it simple to distribute queries across different datasets using RDF as the result format. The query model, based on the concept of publicly recognised namespaces for parts of each scientific dataset, was implemented with a configuration that includes a large number of current biological and chemical datasets. The configuration is flexible, providing the ability to transparently use both private and public datasets in any query. A prototype implementation of the model was used to resolve queries for the Bio2RDF website, including both Bio2RDF datasets and other datasets that do not follow the Bio2RDF URI conventions.
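
As a rough illustration of the kind of query distribution described above (a sketch only, not the Bio2RDF prototype; the endpoint URLs and query are hypothetical placeholders), the snippet below issues the same SPARQL CONSTRUCT query against several endpoints and merges the RDF results into a single graph, taking advantage of RDF as a common result format.

```python
# Hedged sketch (not the Bio2RDF implementation): run one CONSTRUCT query
# against several SPARQL endpoints and merge the RDF results. Endpoint URLs
# and the query are hypothetical placeholders.
from rdflib import Graph
from SPARQLWrapper import SPARQLWrapper, XML

ENDPOINTS = [
    "http://example.org/sparql/dataset-a",   # e.g. a public dataset
    "http://example.org/sparql/dataset-b",   # e.g. a private mirror
]

QUERY = """
CONSTRUCT { ?s ?p ?o }
WHERE { ?s ?p ?o }
LIMIT 100
"""

def federated_construct(query: str, endpoints: list[str]) -> Graph:
    """Send the query to each endpoint and combine the returned RDF graphs."""
    merged = Graph()
    for url in endpoints:
        client = SPARQLWrapper(url)
        client.setQuery(query)
        client.setReturnFormat(XML)           # CONSTRUCT results come back as RDF
        merged += client.query().convert()    # SPARQLWrapper parses them into an rdflib Graph
    return merged

# usage:
# graph = federated_construct(QUERY, ENDPOINTS)
# print(len(graph), "merged triples")
```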

Relevance:

20.00%

Publisher:

Abstract:

This paper attempts to develop a theoretical acceptance model for measuring Web personalization success. Key factors impacting Web personalization acceptance are identified from a detailed literature review. The final model is then cast in a structural equation modeling (SEM) framework comprising nineteen manifest variables, which are grouped into three focal behaviors of Web users. These variables could provide a framework for better understanding the numerous factors that contribute to the success measures of Web personalization technology, especially those concerning the quality of personalized features and how personalized information can be delivered to the user through a personalized Website. The interrelationship between the success constructs is also explained. Empirical validation of this theoretical model is expected in future research.
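
To make the shape of such a model concrete, the sketch below shows how three focal constructs, each measured by a few manifest indicators, might be specified and fitted in an SEM framework. It assumes the Python semopy package is available, and the construct and indicator names are hypothetical placeholders rather than the paper's nineteen variables.

```python
# Hedged sketch, assuming the semopy package. Construct and indicator names
# are hypothetical placeholders, not the paper's variables; the point is only
# to show how a measurement + structural model can be specified and fitted.
import pandas as pd
import semopy

MODEL_DESC = """
PersonalizationQuality =~ pq1 + pq2 + pq3
InformationDelivery =~ id1 + id2 + id3
UserAcceptance =~ ua1 + ua2 + ua3
UserAcceptance ~ PersonalizationQuality + InformationDelivery
"""

def fit_model(data: pd.DataFrame):
    """Fit the hypothesised model to survey data (one column per indicator)."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)
    return model.inspect()   # parameter estimates

# usage (data collection is the future empirical work the abstract refers to):
# estimates = fit_model(pd.read_csv("survey_responses.csv"))
```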

Relevance:

20.00%

Publisher:

Abstract:

Creative Commons (CC) is often seen as a social movement, dismissed by critics as a tool for hobbyists or academics who do not sell their creations to make a living. However, this paper argues that the licensing of creative copyright works under a CC licence does not preclude commercial gain. If used wisely, CC licences can be a useful tool for creators in their quest for commercial success. In particular, this paper argues that the sharing of creative works online under a CC licence allows creators to circumvent traditional distribution channels dominated by content intermediaries, whilst maintaining a level of control over their copyright works (i.e. explicitly reserving some rights but not all rights). This is illustrated by case studies of how CC is being used by content creators and intermediaries, and of how successful their respective methods are in harnessing this tool.

Relevance:

20.00%

Publisher:

Abstract:

In the past two decades there has been increasing interest in branding tourism destinations in an effort to meaningfully differentiate against a myriad of competing places that offer similar attractions and facilities. The academic literature relating to destination branding commenced only as recently as 1998, and there remains a dearth of empirical data that tests the effectiveness of brand campaigns, particularly in terms of enhancing destination loyalty. This paper reports the results of an investigation into destination brand loyalty for Australia as a long-haul destination in a South American market. In spite of the high level of academic interest in the measurement of perceptions of destinations since the 1970s, few previous studies have examined perceptions held by South American consumers. Drawing on a model of consumer-based brand equity (CBBE), antecedents of destination brand loyalty were tested with data from a large Chilean sample of travelers, comprising a mix of previous visitors and non-visitors to Australia. Findings suggest that destination brand awareness, brand image, and brand value are positively related to brand loyalty for a long-haul destination, whereas destination brand quality was not significantly related. The results also indicate that Australia is a more compelling destination brand for previous visitors than for non-visitors.
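
As a simple indication of how such antecedent relationships can be tested (a regression sketch only, with hypothetical column names; the study itself would more naturally estimate the CBBE model with structural equation modelling):

```python
# Hedged sketch: a plain regression of brand loyalty on its hypothesised
# CBBE antecedents. Column names are hypothetical; the study's own analysis
# would more likely use structural equation modelling than OLS.
import pandas as pd
import statsmodels.formula.api as smf

def test_antecedents(df: pd.DataFrame):
    """Regress loyalty on awareness, image, perceived value and quality."""
    model = smf.ols("loyalty ~ awareness + image + value + quality", data=df).fit()
    return model.summary()

# usage:
# print(test_antecedents(pd.read_csv("chilean_sample.csv")))
```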

Relevance:

20.00%

Publisher:

Abstract:

While Business Process Management (BPM) is an established discipline, the increased adoption of BPM technology in recent years has introduced new challenges. One challenge concerns dealing with the ever-growing complexity of business process models. Mechanisms for dealing with this complexity can be classified into two categories: 1) those that are solely concerned with the visual representation of the model and 2) those that change its inner structure. While significant attention is paid to the latter category in the BPM literature, this paper focuses on the former category. It presents a collection of patterns that generalize and conceptualize various existing mechanisms to change the visual representation of a process model. Next, it provides a detailed analysis of the degree of support for these patterns in a number of state-of-the-art languages and tools. This paper concludes with the results of a usability evaluation of the patterns conducted with BPM practitioners.

Relevance:

20.00%

Publisher:

Abstract:

Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases as equipment ages and can have serious consequences for the continuity of electricity supply. As the equipment used in high voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods of time. On the other hand, there is normally a significant lead time between ordering and receiving equipment. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, which is of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to include detailed modelling and operation of substation and sub-transmission equipment using network flow evaluation and to consider multiple levels of component failures. In this thesis a new model for aging equipment is developed that combines the standard treatment of random failures with a specific model for aging failures. This technique is applied to examine the impact of aging equipment on the reliability of supply to bulk supply loads and to consumers in the distribution network over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints. The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information that helps utilities to better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach was also developed that supports early planning decisions on replacement activities for non-repairable aging components, in order to maintain an economically acceptable level of system reliability. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of non-repairable failure.
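
The combination of random and aging failure modes can be illustrated with a small numerical sketch. The snippet below is not the thesis's model; it simply combines a constant random-failure hazard with a Weibull aging hazard to estimate the probability that a non-repairable component fails in each planning year, using hypothetical parameter values.

```python
# Illustrative sketch: combining random failures (constant hazard) with
# aging failures (Weibull hazard) to estimate the probability that a
# non-repairable component fails within each planning year.
# All parameter values are hypothetical.
import numpy as np

LAMBDA_RANDOM = 0.01      # random failures per year (constant hazard)
WEIBULL_BETA = 4.0        # shape > 1 => increasing (aging) hazard
WEIBULL_ETA = 60.0        # characteristic life in years

def survival(t):
    """Probability the component survives to age t under both failure modes."""
    return np.exp(-LAMBDA_RANDOM * t) * np.exp(-(t / WEIBULL_ETA) ** WEIBULL_BETA)

def annual_failure_probability(age_start, years=10):
    """P(failure in each year | survived to the start of the planning period)."""
    t = age_start + np.arange(years + 1)
    s = survival(t) / survival(age_start)   # condition on survival to age_start
    return s[:-1] - s[1:]                    # failure probability per year

if __name__ == "__main__":
    for year, p in enumerate(annual_failure_probability(age_start=40.0), 1):
        print(f"planning year {year}: P(failure) = {p:.4f}")
```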

Relevance:

20.00%

Publisher:

Abstract:

Erythromycin is the standard antibiotic used for treatment of Ureaplasma species during pregnancy; however, maternally administered erythromycin may be ineffective at eliminating intra-amniotic ureaplasma infections. We asked if erythromycin would eradicate intra-amniotic ureaplasma infections in pregnant sheep. At 50 days of gestation (d, term=150d) pregnant ewes received intra-amniotic injections of erythromycin-sensitive U. parvum serovar 3 (n=16) or 10B medium (n=16). At 100d, amniocentesis was performed; five fetal losses (ureaplasma group: n=4; 10B group: n=1) had occurred by this time. Remaining ewes were allocated into treatment subgroups: medium only (M, n=7); medium and erythromycin (M/E, n=8); ureaplasma only (Up, n=6) or ureaplasma and erythromycin (Up/E, n=6). Erythromycin was administered intramuscularly (500 mg), eight-hourly for four days (100d-104d). Amniotic fluid samples were collected at 105d. At 125d preterm fetuses were surgically delivered and specimens were collected for culture and histology. Erythromycin was quantified in amniotic fluid by liquid chromatography-mass spectrometry. Ureaplasmas were isolated from the amniotic fluid, chorioamnion and fetal lung of animals from the Up and Up/E groups, however, the numbers of U. parvum recovered were not different between these groups. Inflammation in the chorioamnion, cord and fetal lung was increased in ureaplasma-exposed animals compared to controls, but was not different between the Up and Up/E groups. Erythromycin was detected in amniotic fluid samples, although concentrations were low (<10-76 ng/mL). This study demonstrates that maternally administered erythromycin does not eradicate chronic, intra-amniotic ureaplasma infections or improve fetal outcomes in an ovine model, potentially due to the poor placental passage of erythromycin.

Relevance:

20.00%

Publisher:

Abstract:

Pipe insulation between the collector and storage tank on pumped storage (commonly called split) solar water heaters can be subject to high temperatures, with a maximum equal to the collector stagnation temperature. The frequency of occurrence of these temperatures is dependent on many factors including climate, hot water demand, system size and efficiency. This paper outlines the findings of a computer modelling study to quantify the frequency of occurrence of pipe temperatures of 80 degrees Celsius or greater at the outlet of the collectors for these systems. This study will help insulation suppliers determine the suitability of their materials for this application. The TRNSYS program was used to model the performance of a common size of domestic split solar system, using both flat plate and evacuated tube, selective surface collectors. Each system was modelled at a representative city in each of the 6 climate zones for Australia and New Zealand, according to AS/NZS4234 - Heated Water Systems - Calculation of Energy Consumption, and the ORER RECs calculation method. TRNSYS was used to predict the frequency of occurrence of the temperatures that the pipe insulation would be exposed to over an average year, for hot water consumption patterns specified in AS/NZS4234, and for worst case conditions in each of the climate zones. The results show:
* For selectively surfaced, flat plate collectors in the hottest location (Alice Springs) with a medium size hot water demand according to AS/NZS4234, the annual frequency of occurrence of temperatures at and above 80 degrees Celsius was 33 hours. The frequency of temperatures at and above 140 degrees Celsius was insignificant.
* For evacuated tube collectors in the hottest location (Alice Springs), the annual frequency of temperatures at and above 80 degrees Celsius was 50 hours. Temperatures at and above 140 degrees Celsius were significant and were estimated to occur for more than 21 hours per year in this climate zone. Even in Melbourne, temperatures at and above 80 degrees Celsius can occur for 12 hours per year and at and above 140 degrees Celsius for 5 hours per year.
* The worst case identified was for evacuated tube collectors in Alice Springs, with mostly afternoon loads in January. Under these conditions, the frequency of temperatures at and above 80 degrees Celsius was 10 hours for this month only. Temperatures at and above 140 degrees Celsius were predicted to occur for 5 hours in January.
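
The frequency-of-occurrence figures above are essentially threshold-exceedance counts over an hourly simulated year. The sketch below shows that post-processing step in Python; the file name and column are hypothetical placeholders for the TRNSYS output used in the study.

```python
# Minimal sketch of the post-processing step described above: counting the
# annual hours for which the simulated collector-outlet pipe temperature
# equals or exceeds given thresholds. The input file name and column are
# hypothetical; in the study the hourly series came from TRNSYS.
import pandas as pd

THRESHOLDS_C = [80, 140]

def hours_at_or_above(temps_c: pd.Series, threshold_c: float, timestep_h: float = 1.0) -> float:
    """Return the number of hours per year at or above threshold_c."""
    return (temps_c >= threshold_c).sum() * timestep_h

if __name__ == "__main__":
    # hourly pipe temperatures for one simulated year (8760 values)
    temps = pd.read_csv("pipe_outlet_temps.csv")["temp_c"]
    for th in THRESHOLDS_C:
        print(f">= {th} degC: {hours_at_or_above(temps, th):.0f} h/year")
```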

Relevance:

20.00%

Publisher:

Abstract:

The field of literacy studies has always been challenged by the changing technologies that humans have used to express, represent and communicate their feelings, ideas, understandings and knowledge. However, while the written word has remained central to literacy processes over a long period, it is generally accepted that there have been significant changes to what constitutes ‘literate’ practice. In particular, the status of the printed word has been challenged by the increasing dominance of the image, along with the multimodal meaning-making systems facilitated by digital media. For example, Gunther Kress and other members of the New London Group have argued that the second half of the twentieth century saw a significant cultural shift from the linguistic to the visual as the dominant semiotic mode. This, in turn, they suggest, was accompanied by a cultural shift ‘from page to screen’ as a dominant space of representation (e.g. Cope & Kalantzis, 2000; Kress, 2003; New London Group, 1996). In a similar vein, Bill Green has noted that we have witnessed a shift from the regime of the print apparatus to a regime of the digital electronic apparatus (Lankshear, Snyder and Green, 2000). For these reasons, the field of literacy education has been challenged to find new ways to conceptualise what is meant by ‘literacy’ in the twenty-first century and to rethink the conditions under which children might best be taught to be fully literate so that they can operate with agency in today’s world.

Relevance:

20.00%

Publisher:

Abstract:

In Australia, as far back as 1993, researchers such as Baladin and Chapmen reported that "18% of the total Australian population and 51% of the population over 60 years of age were identified as having a disability" (2001; p38.2). Statistics such as these are not by any means astonishing, even to members of the general public, and it is widely understood that these figures will only increase significantly in the near future. What is particularly surprising, however, is the lack, in the face of such statistics, of new and creative responses to this demographic shift, particularly by the architecture and construction industries. The common response from a range of sectors seems to be the repetition of a series of models which offer limited, and often undesirable, housing options. It is against this backdrop, characterized by a lack of original options from mainstream practitioners and relevant government bodies, that the need has arisen to develop alternative models at the grass-roots level. This paper reports primarily on the work of one group comprising a not-for-profit organization, a pro-bono design practice group and a local university working together to design a more holistic, emotionally sustainable independent living model of housing for families where a member of the family has a disability. This approach recognizes the limitations of universal design in that it often does not "... meet all the housing needs that arise for people with moderate to severe disabilities" (Scotts, Margie et al, 2007; p.17). It is hoped that, by examining the work of such a collective, which is not driven by profit or policy but rather born with the aim of addressing first and foremost individual and community need, better insight can be gained into the real requirements of individuals and families, as well as opening up a view to new ways of fulfilling them.

Relevance:

20.00%

Publisher:

Abstract:

A Simulink Matlab control system model of a heavy vehicle (HV) suspension has been developed. The aim of the exercise presented in this paper was to produce a working model of a HV suspension that could be used for future research. A working computer model is easier and cheaper to re-configure than a HV axle group installed on a truck; it presents less risk should something go wrong and allows more scope for variation and sensitivity analysis before embarking on further "real-world" testing. Empirical data recorded as the input and output signals of a HV suspension were used to develop the parameters for computer simulation of a linear time-invariant system described by a second-order differential equation (i.e. a "2nd-order" system). Using the empirical input data to drive the computer model allowed its output to be validated against the recorded output data. The errors ranged from less than 1% to approximately 3% for any parameter, when comparing like-for-like inputs and outputs. The model is presented along with the results of the validation. This model will be used in future research in the QUT/Main Roads project Heavy vehicle suspensions – testing and analysis, particularly for a theoretical model of a multi-axle HV suspension with varying values of dynamic load sharing. Allowance will need to be made for the errors noted when using the computer models in this future work.
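
As a rough Python counterpart to the Simulink model described above (a sketch only, not the paper's implementation), the snippet below simulates a generic second-order LTI system, here assumed to take the canonical no-zeros form ÿ(t) + 2ζωn·ẏ(t) + ωn²·y(t) = ωn²·u(t), and compares its response against a recorded output. The parameter values and signals are hypothetical stand-ins for the calibrated suspension data.

```python
# Hedged sketch (not the paper's Simulink model): simulate a generic
# second-order LTI system with scipy and compare its response against a
# recorded output signal. Parameter values and signals are hypothetical.
import numpy as np
from scipy import signal

ZETA = 0.3        # damping ratio (hypothetical)
OMEGA_N = 12.0    # natural frequency, rad/s (hypothetical)

# Standard 2nd-order transfer function: wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
system = signal.TransferFunction([OMEGA_N**2], [1.0, 2.0 * ZETA * OMEGA_N, OMEGA_N**2])

def simulate(t: np.ndarray, u: np.ndarray) -> np.ndarray:
    """Simulate the model response to the recorded input signal u(t)."""
    _, y_model, _ = signal.lsim(system, U=u, T=t)
    return y_model

def percent_error(y_model: np.ndarray, y_measured: np.ndarray) -> float:
    """Simple like-for-like error measure between model and measurement."""
    return 100.0 * np.linalg.norm(y_model - y_measured) / np.linalg.norm(y_measured)

if __name__ == "__main__":
    t = np.linspace(0.0, 5.0, 1000)
    u = np.sin(2 * np.pi * 1.5 * t)                            # stand-in for a recorded input
    y_meas = simulate(t, u) + 0.01 * np.random.randn(t.size)   # stand-in measurement
    print(f"error: {percent_error(simulate(t, u), y_meas):.2f}%")
```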

Relevance:

20.00%

Publisher:

Abstract:

Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration, to be calibrated using data acquired at these locations, and to be validated against data acquired at the same sites, so that their outputs would be truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than the empiricism of the macroscopic models currently used. The models also needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible in this single study to produce a stand-alone model applicable to all facilities and locations; however, the scene has been set for the application of the models to a much broader range of operating conditions. Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models over a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled; some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations: merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams. On-ramp and total upstream flow are required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections.
Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995). The minimum average minor stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows. Pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be smaller when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and smaller still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration of the traffic inputs, critical gap and minimum follow-on time is required for both merging and lane changing, and a general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and to provide further insight into the nature of operations.
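
To make the gap-acceptance machinery concrete, the sketch below computes a minor-stream (on-ramp) capacity for major-stream headways following Cowan's M3 distribution, using the standard absorption-capacity expression (after Troutbeck). It is an illustrative rendering only, not the limited-priority refinement developed in the thesis, and the parameter values are hypothetical.

```python
# Illustrative sketch: minor-stream (on-ramp) capacity under gap acceptance
# with Cowan M3 major-stream headways. This is the standard absorption
# capacity expression (after Troutbeck), not the limited-priority variant
# developed in the thesis; parameter values below are hypothetical.
import math

def m3_capacity(q_major_vph: float, alpha: float, delta_s: float = 1.0,
                t_c_s: float = 2.0, t_f_s: float = 1.1) -> float:
    """Minor-stream capacity (veh/h).

    q_major_vph : major (kerb lane) flow, veh/h
    alpha       : proportion of free (unbunched) major-stream vehicles
    delta_s     : minimum major-stream headway (s)
    t_c_s       : critical gap (s)
    t_f_s       : follow-on time (s)
    """
    q = q_major_vph / 3600.0                 # veh/s
    lam = alpha * q / (1.0 - delta_s * q)    # decay rate of the M3 headway tail
    cap_vps = alpha * q * math.exp(-lam * (t_c_s - delta_s)) / (1.0 - math.exp(-lam * t_f_s))
    return 3600.0 * cap_vps

if __name__ == "__main__":
    for q_major in (600, 1200, 1800):
        print(q_major, "veh/h major ->", round(m3_capacity(q_major, alpha=0.75), 1), "veh/h minor")
```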

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Grafting of autologous hyaline cartilage and bone for articular cartilage repair is a well-accepted technique. Although encouraging midterm clinical results have been reported, no information on the mechanical competence of the transplanted joint surface is available. HYPOTHESIS: The mechanical competence of osteochondral autografts is maintained after transplantation. STUDY DESIGN: Controlled laboratory study. METHODS: Osteochondral defects were filled with autografts (7.45 mm in diameter) in one femoral condyle in 12 mature sheep. The ipsilateral femoral condyle served as the donor site, and the resulting defect (8.3 mm in diameter) was left empty. The repair response was examined after 3 and 6 months with mechanical and histologic assessment and histomorphometric techniques. RESULTS: Good surface congruity and plug placement were achieved. The Young's modulus of the grafted cartilage dropped significantly to 57.5% of that of healthy tissue after 3 months (P < .05) but then recovered to 82.2% after 6 months. The aggregate and dynamic moduli behaved similarly. The graft edges showed fibrillation and, in some cases (4 of 6), hypercellularity and chondrocyte clustering. Subchondral bone sclerosis was observed in 8 of 12 cases, and the amount of mineralized bone in the graft area increased from 40% to 61%. CONCLUSIONS: The mechanical quality of transplanted cartilage varies considerably over a short period of time, potentially reflecting both degenerative and regenerative processes, while histological signs of both cartilage and bone degeneration occur. CLINICAL RELEVANCE: Both the mechanically degenerative and restorative processes illustrate the complex progression of regeneration after osteochondral transplantation. The histologic evidence raises doubts as to the long-term durability of the osteochondral repair.

Relevance:

20.00%

Publisher:

Abstract:

The formation of new blood vessels is a prerequisite for bone healing. CYR61 (CCN1), an extracellular matrix-associated signaling protein, is a potent stimulator of angiogenesis and mesenchymal stem cell expansion and differentiation. A recent study showed that CYR61 is expressed during fracture healing and suggested that CYR61 plays a significant role in cartilage and bone formation. The hypothesis of the present study was that decreased fixation stability, which leads to a delay in healing, would lead to reduced CYR61 protein expression in the fracture callus. The aim of the study was to quantitatively analyze CYR61 protein expression, vascularization, and tissue differentiation in the osteotomy gap and relate them to the mechanical fixation stability during the course of healing. A mid-shaft osteotomy of the tibia was performed in two groups of sheep and stabilized with either a rigid or a semirigid external fixator, each allowing different amounts of interfragmentary movement. The sheep were sacrificed at 2, 3, 6, and 9 weeks postoperatively. The tibiae were tested biomechanically, and histological sections from the callus were analyzed immunohistochemically with regard to CYR61 protein expression and vascularization. Expression of CYR61 protein was upregulated in the early phase of fracture healing (2 weeks) and decreased over the course of healing. Decreased fixation stability was associated with a reduced upregulation of CYR61 protein expression and reduced vascularization at 2 weeks, leading to slower healing. The maximum cartilage callus fraction in both groups was reached at 3 weeks. However, the semirigid fixator group showed a significantly lower CYR61 immunoreactivity in cartilage than the rigid fixator group at this time point. The fraction of cartilage in the semirigid fixator group was not replaced by bone as quickly as in the rigid fixator group, leading to an inferior histological and mechanical callus quality at 6 weeks and therefore to slower healing. The results supply further evidence that CYR61 may serve as an important regulator of bone healing.

Relevance:

20.00%

Publisher:

Abstract:

Although player enjoyment is central to computer games, there is currently no accepted model of player enjoyment in games. There are many heuristics in the literature, based on elements such as the game interface, mechanics, gameplay, and narrative. However, there is a need to integrate these heuristics into a validated model that can be used to design, evaluate, and understand enjoyment in games. We have drawn together the various heuristics into a concise model of enjoyment in games that is structured by flow. Flow, a widely accepted model of enjoyment, includes eight elements that, we found, encompass the various heuristics from the literature. Our new model, GameFlow, consists of eight elements: concentration, challenge, skills, control, clear goals, feedback, immersion, and social interaction. Each element includes a set of criteria for achieving enjoyment in games. An initial investigation and validation of the GameFlow model was carried out by conducting expert reviews of two real-time strategy games, one high-rated and one low-rated, using the GameFlow criteria. The result was a deeper understanding of enjoyment in real-time strategy games and the identification of the strengths and weaknesses of the GameFlow model as an evaluation tool. The GameFlow criteria were able to successfully distinguish between the high-rated and low-rated games and identify why one succeeded and the other failed. We concluded that the GameFlow model can be used in its current form to review games; further work will provide tools for designing and evaluating enjoyment in games.
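
As a small illustration of how criterion ratings against the eight elements could be aggregated when reviewing a game, the sketch below averages per-criterion scores into per-element and overall scores. The element names come from the model described above; the rating scale, criteria counts, and aggregation rule are hypothetical and are not the authors' instrument.

```python
# Illustrative sketch only: one way to aggregate expert ratings against
# GameFlow-style criteria into per-element and overall scores. The element
# names come from the model described above; the rating scale and the
# aggregation rule are hypothetical, not the authors' instrument.
from statistics import mean

ELEMENTS = ["concentration", "challenge", "skills", "control",
            "clear goals", "feedback", "immersion", "social interaction"]

def element_scores(ratings: dict[str, list[float]]) -> dict[str, float]:
    """Average the criterion ratings (e.g. on a 0-5 scale) within each element."""
    return {e: mean(ratings[e]) for e in ELEMENTS if e in ratings}

def overall_score(ratings: dict[str, list[float]]) -> float:
    """Unweighted mean across elements, as a simple comparative score."""
    return mean(element_scores(ratings).values())

if __name__ == "__main__":
    review = {e: [3.0, 4.0, 4.5] for e in ELEMENTS}   # hypothetical expert ratings
    print(element_scores(review))
    print(f"overall: {overall_score(review):.2f}")
```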