336 results for best proximity point
Abstract:
In 2005, Stephen Abram, vice president of Innovation at SirsiDynix, challenged library and information science (LIS) professionals to start becoming “librarian 2.0.” In the last few years, discussion and debate about the “core competencies” needed by librarian 2.0 have appeared in the “biblioblogosphere” (blogs written by LIS professionals). However, beyond these informal blog discussions, few systematic and empirically based studies have taken place. A project funded by the Australian Learning and Teaching Council fills this gap by identifying the key skills, knowledge, and attributes required by “librarian 2.0.” Eighty-one members of the Australian LIS profession participated in a series of focus groups. Eight themes emerged as critical to “librarian 2.0”: technology, communication, teamwork, user focus, business savvy, evidence-based practice, learning and education, and personal traits. Guided by these findings, interviews with 36 LIS educators explored the current approaches used within contemporary LIS education to prepare graduates to become “librarian 2.0.” This video presents an example of ‘great practice’ in current LIS education as it strives to foster web 2.0 professionals.
Abstract:
Web service technology is increasingly being used to build various e-Applications, in domains such as e-Business and e-Science. Characteristic benefits of web service technology are its interoperability, decoupling and just-in-time integration. Using web service technology, an e-Application can be implemented by web service composition, that is, by composing existing individual web services in accordance with the business process of the application. The application is then provided to customers in the form of a value-added composite web service. An important and challenging issue of web service composition is how to meet Quality-of-Service (QoS) requirements, which include customer-focused elements such as response time, price, throughput and reliability, and how best to deliver QoS for the composite so as to fulfil customers’ expectations and achieve their satisfaction. Fulfilling these QoS requirements, that is, addressing the QoS-aware web service composition problem, is the focus of this project. From a computational point of view, QoS-aware web service composition can be transformed into diverse optimisation problems, characterised as complex, large-scale, highly constrained and multi-objective. We therefore use genetic algorithms (GAs) to address QoS-based service composition problems. More precisely, this study addresses three important subproblems of QoS-aware web service composition: QoS-based web service selection for a composite web service, accommodating constraints on inter-service dependence and conflict; QoS-based resource allocation and scheduling for multiple composite services on hybrid clouds; and performance-driven composite service partitioning for decentralised execution. Based on operations research theory, we model the three problems as a constrained optimisation problem, a resource allocation and scheduling problem, and a graph partitioning problem, respectively.
Then, we present novel GAs to address these problems and conduct experiments to evaluate their performance. Finally, verification experiments are performed to show the correctness of the GAs. The major outcomes of the first problem are three novel GAs: a penalty-based GA, a min-conflict hill-climbing repairing GA, and a hybrid GA. These GAs adopt different strategies to handle constraints on inter-service dependence and conflict, an important factor that has been largely ignored by existing algorithms and that might lead to the generation of infeasible composite services. Experimental results demonstrate the effectiveness of our GAs for handling the QoS-based web service selection problem with constraints on inter-service dependence and conflict, as well as their better scalability than the existing integer programming-based method for large-scale web service selection problems. The second problem resulted in two GAs: a random-key GA and a cooperative coevolutionary GA (CCGA). Experiments demonstrate the good scalability of the two algorithms. In particular, the CCGA scales well as the number of composite services involved in a problem increases, while no other algorithm demonstrates this ability. The findings from the third problem result in a novel GA for composite service partitioning for decentralised execution. Compared with existing heuristic algorithms, the new GA is more suitable for large-scale composite web service partitioning problems. In addition, the GA outperforms existing heuristic algorithms, generating a better deployment topology for a composite web service for decentralised execution. These effective and scalable GAs can be integrated into QoS-based management tools to facilitate the delivery of feasible, reliable and high-quality composite web services.
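As a hedged illustration of the penalty-based constraint-handling idea described above, the toy GA below selects one candidate service per abstract task while penalising a conflicting pair of selections. Every number, the QoS model (response time plus price), the conflict pair, and the genetic operators are invented for this sketch and are not taken from the thesis:

```python
import random

# Hypothetical data: each of N_TASKS abstract tasks has N_CANDS candidate
# services with (response_time, price) QoS values; all values illustrative.
random.seed(0)
N_TASKS, N_CANDS = 5, 4
qos = [[(random.uniform(1, 10), random.uniform(1, 5)) for _ in range(N_CANDS)]
       for _ in range(N_TASKS)]
# One made-up conflict constraint: task 0's candidate 0 may not
# co-occur with task 1's candidate 0 in the same composite.
conflicts = [((0, 0), (1, 0))]

def fitness(sol):
    # Lower aggregate response time + price is better; each constraint
    # violation adds a large penalty (the penalty-based strategy).
    cost = sum(qos[t][s][0] + qos[t][s][1] for t, s in enumerate(sol))
    penalty = sum(100 for (t1, s1), (t2, s2) in conflicts
                  if sol[t1] == s1 and sol[t2] == s2)
    return cost + penalty

def evolve(pop_size=30, gens=50):
    pop = [[random.randrange(N_CANDS) for _ in range(N_TASKS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_TASKS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:           # point mutation
                child[random.randrange(N_TASKS)] = random.randrange(N_CANDS)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

The penalty strategy keeps infeasible individuals in the population but makes them unattractive; a repairing GA (the min-conflict hill-climbing variant named above) would instead actively move violating genes to conflict-free values.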
Abstract:
This chapter will introduce Australia’s Dennis family – a case of ‘incremental entrepreneurship’ in the business transition from the first to the second generation. Following the second generation’s formal involvement and ownership in the business, Dennis Family Corporation (DFC) undertook a major professionalization process to formalize the family business and ensure its continued success. The members of the second generation have successfully sustained the entrepreneurial spirit of their family business (albeit in a different style), adding value to the firm in an ‘incremental’ manner. Throughout the chapter there will be a strong emphasis on the family element of DFC and the roles that each family member has played. Bert Dennis, as the founder and incumbent leader of the firm, has witnessed major changes to the business he built from the ground up. His children, in particular his son Grant Dennis as the primary next generation issue champion, have seen the changes from another perspective – ensuring the business remains within the family into the second generation and beyond. The professionalization process was sparked by a commitment from the second generation to continue to ‘make a real go’ of the family business rather than simply liquidating and distributing the assets. The dedication of all the family members to this objective has ensured the success of this process, and ultimately, the longevity of the firm. Although DFC has become more ‘professional’, it has not lost its entrepreneurial character; rather, it has improved the ways in which entrepreneurialism is fostered and pursued in the company. In essence, this case outlines how the implementation of appropriate governance and management practices has allowed the Dennis family to overcome the challenges and maximize the opportunities associated with owning and operating a multigenerational family firm.
From a theoretical perspective, this case uses the concepts of entrepreneurial orientation (EO) (Lumpkin and Dess 1996) and the resource-based view (RBV) (Habbershon and Williams 1999; Barney 1991; Wernerfelt 1984) to demonstrate how the firm has leveraged its familiness to foster an enduring spirit of entrepreneurship and to maintain a sustained competitive advantage.
Abstract:
This paper formulates a node-based smoothed conforming point interpolation method (NS-CPIM) for solid mechanics. In the proposed NS-CPIM, higher-order conforming PIM (CPIM) shape functions are constructed to produce a continuous and piecewise-quadratic displacement field over the whole problem domain, and the smoothed strain field is obtained through a smoothing operation over each smoothing domain associated with the domain nodes. The smoothed Galerkin weak form is then used to create the discretized system equations. Numerical studies demonstrate the following good properties of NS-CPIM: (1) it passes both the standard and quadratic patch tests; (2) it provides an upper bound on the strain energy; (3) it avoids volumetric locking; (4) it provides higher accuracy than the node-based smoothed schemes of the original PIMs.
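The nodal smoothing operation has a simple one-dimensional caricature: the smoothed strain at a node is the length-weighted average of the compatible strains over the cells in its nodal smoothing domain (half of each adjacent cell). This is an illustrative sketch only; the paper's 2-D quadratic CPIM formulation is not reproduced here, and the mesh and displacements below are made up:

```python
# Illustrative 1-D node-based strain smoothing (not the paper's scheme).
nodes = [0.0, 1.0, 2.5, 4.0]              # made-up 1-D mesh coordinates
u = [0.0, 0.1, 0.2, 0.5]                  # made-up nodal displacements

# Compatible (piecewise-constant) strain in each cell: du / dx.
cells = list(zip(nodes, nodes[1:]))
strain = [(u[i + 1] - u[i]) / (b - a) for i, (a, b) in enumerate(cells)]

def smoothed_strain(i):
    # Nodal smoothing domain = half of each cell adjacent to node i.
    parts = []
    if i > 0:                              # half of the left cell
        a, b = cells[i - 1]
        parts.append(((b - a) / 2, strain[i - 1]))
    if i < len(cells):                     # half of the right cell
        a, b = cells[i]
        parts.append(((b - a) / 2, strain[i]))
    total = sum(w for w, _ in parts)
    return sum(w * e for w, e in parts) / total

eps = [smoothed_strain(i) for i in range(len(nodes))]
```

The averaging softens the model, which is the mechanism behind the upper-bound-on-strain-energy property claimed for node-based smoothed methods.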
Abstract:
In this article, an enriched radial point interpolation method (e-RPIM) is developed for computational mechanics. The conventional radial basis function (RBF) interpolation is augmented with suitable basis functions chosen to reflect the natural properties of deformation. The performance of the enriched meshless RBF shape functions is first investigated through surface fitting. The results show that the enriched RBF interpolation fits complex surfaces with much better accuracy than the conventional RBF interpolation, and that the enriched RBF shape functions retain all the advantages of conventional RBF interpolation while accurately reflecting the deformation properties of the problem. The system of equations for two-dimensional solids is then derived based on the enriched RBF shape functions, in both meshless strong form and weak form. A numerical example of a bar is presented to study the effectiveness and efficiency of e-RPIM. As an important application, the newly developed e-RPIM, augmented by selected trigonometric basis functions, is applied to crack problems. It is demonstrated that the present e-RPIM is accurate and stable for fracture mechanics problems.
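A minimal 1-D sketch of the augmentation idea: RBF interpolation is extended with extra basis columns (here a constant, a linear term and sin, standing in for problem-suited enrichment) via the standard saddle-point system. The multiquadric kernel, shape parameter, and test function are all illustrative assumptions, not the article's formulation:

```python
import numpy as np

def rbf(r, c=1.0):
    return np.sqrt(r**2 + c**2)           # multiquadric kernel (assumed)

def fit(x, y, basis):
    # Saddle-point system: [Phi P; P^T 0] [a; b] = [y; 0],
    # where P holds the enrichment basis evaluated at the nodes.
    Phi = rbf(np.abs(x[:, None] - x[None, :]))
    P = np.column_stack([f(x) for f in basis])
    n, m = len(x), P.shape[1]
    A = np.block([[Phi, P], [P.T, np.zeros((m, m))]])
    coef = np.linalg.solve(A, np.concatenate([y, np.zeros(m)]))
    a, b = coef[:n], coef[n:]
    def s(xq):                             # evaluate the interpolant
        return rbf(np.abs(xq[:, None] - x[None, :])) @ a + \
               np.column_stack([f(xq) for f in basis]) @ b
    return s

x = np.linspace(0, np.pi, 9)
y = np.sin(x) + 0.3 * x                    # target lies in the enriched span
basis = [lambda t: np.ones_like(t), lambda t: t, np.sin]   # enriched basis
s = fit(x, y, basis)
xq = np.linspace(0, np.pi, 50)
err = np.max(np.abs(s(xq) - (np.sin(xq) + 0.3 * xq)))
```

Because the target function lies in the span of the enrichment, the augmented interpolant reproduces it essentially exactly between the nodes, which is the kind of gain the surface-fitting study reports for deformation-like fields.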
Abstract:
The research objectives of this thesis were to contribute to Bayesian statistical methodology by contributing to risk assessment statistical methodology, and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems, namely risk assessment analyses for wastewater, and secondly, in a four-dimensional dataset, assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure to incorporate all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers, two of which have been published, with two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model, and the applied contribution the analysis of moisture over depth and estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that, with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with increasing variances with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
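The defining feature of the CAR layered model, neighbours restricted to the same depth layer, can be illustrated by constructing the precision matrix of a proper CAR prior on a small synthetic grid. Grid sizes and the rho/tau parameters below are invented for demonstration; this is not the thesis's WinBUGS or pyMCMC code:

```python
import numpy as np

# Sites on a small 3-D grid; sites are CAR neighbours only if they lie
# in the same depth layer (the "CAR layered" neighbourhood).
nx, ny, nz = 3, 3, 4                      # columns, rows, depth layers (made up)
sites = [(i, j, k) for k in range(nz) for j in range(ny) for i in range(nx)]
idx = {s: n for n, s in enumerate(sites)}

n = len(sites)
W = np.zeros((n, n))                      # adjacency matrix
for (i, j, k) in sites:
    for di, dj in ((1, 0), (0, 1)):       # horizontal neighbours only
        nb = (i + di, j + dj, k)          # same layer k: depth unchanged
        if nb in idx:
            a, b = idx[(i, j, k)], idx[nb]
            W[a, b] = W[b, a] = 1

rho, tau = 0.9, 1.0                       # illustrative proper-CAR parameters
D = np.diag(W.sum(axis=1))
Q = tau * (D - rho * W)                   # CAR precision matrix

# Because neighbours never cross layers, Q is block diagonal by depth,
# which lets the structured variance differ freely between layers.
layer = np.array([k for (_, _, k) in sites])
cross = Q[layer[:, None] != layer[None, :]]
```

The block-diagonal structure is what allows both the spatially structured and unstructured variances to differ at every depth, as the abstract describes, and it is also what makes layer-wise block updating in a Gibbs sampler natural.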
Abstract:
This paper presents a comprehensive study to find the most efficient bitrate requirement for delivering mobile video that optimizes bandwidth while maintaining a good user viewing experience. In the study, forty participants were asked to choose the lowest quality video that would still provide a comfortable, long-term viewing experience, knowing that higher video quality is more expensive and bandwidth intensive. The paper proposes the lowest pleasing bitrates and corresponding encoding parameters for five different content types: cartoon, movie, music, news and sports. It also explores how the lowest pleasing quality is influenced by content type, image resolution and bitrate, as well as by the user’s gender, prior viewing experience, and preferences. In addition, it analyzes the trajectory of users’ progression while selecting the lowest pleasing quality. The findings reveal that the lowest bitrate requirement for a pleasing viewing experience is much higher than that for the lowest acceptable quality. Users’ criteria for the lowest pleasing video quality relate to the video’s content features, as well as its usage purpose and the user’s personal preferences. These findings can give video providers guidance on what quality they should offer to please mobile users.
Abstract:
Our cross-national field study of wine entrepreneurship in the “wrong” places provides some redress to the focus of the “regional advantage” literature on places that have already won and on the firms that benefit from “clusters” and other centers of industry advantage. Regional “disadvantage” is at best a shadowy afterthought to this literature. By poking around in these shadows, we help to synthesize and extend the incipient yet burgeoning literature on entrepreneurial “resourcefulness”, and we contribute to the developing body of insights and theory pertinent to the numerous but often ignored firms and startups that must worry first about how they will compete at all now if they are ever to have a chance of “winning” in the future. The core of our findings suggests that understandable – though contested – processes of ingenuity underlie entrepreneurial responses to regional disadvantage. Because we study entrepreneurship that from many angles simply does not make sense, we are also able to proffer a novel perspective on entrepreneurial sensemaking.
Abstract:
Organisations within the not-for-profit sector provide services to individuals and groups that government and for-profit organisations cannot or will not consider. The not-for-profit sector has come to be a vibrant and rich agglomeration of services and programs that operate under a myriad of philosophical stances, service orientations, client groupings and operational capacities. In Australia these organisations and services provide social support and service assistance to many people in the community, often targeting their assistance to the most difficult of clients. Initially, in undertaking this role, the not-for-profit sector received limited sponsorship from government. Over time governments assumed greater responsibility in the form of service grants to particular groups: ‘the worthy poor’. More recently, they have entered into contractual service agreements with the not-for-profit sector, which specify the nature of the outcomes to be achieved and, to a degree, the way in which the services will be provided. A consequence of this growing shift to a more marketised model of service contracting, often offered up under the label of enhanced collaborative practice, has been increased competitiveness between agencies that had previously worked well together (Keast and Brown, 2006). Another trend emerging from the market approach is the entrance of for-profit providers. These larger organisations have higher levels of organisational capacity, with considerable organisational slack that allows them to adopt new service roles. Shaped almost as ‘shadow governments’, they appear to be strongly preferred by governments looking for greater accountability of outcomes and an easier way to control the interaction with the conventional not-for-profit sector.
The question is whether governments’ apparent preference for larger organisational arrangements will lead to the demise of the vibrancy of the not-for-profit sector and affect service provision to those people who fall outside the remit of the new service providers. To address this issue, this paper uses information gleaned from a state-wide survey of not-for-profit organisations in Queensland, Australia, which covered organisational size, operational scope, funding arrangements and governance/management approaches. Supplementing this information is qualitative data derived from 17 focus groups and 120 interviews conducted over ten years of study of this sector. The findings contribute to a greater understanding of the practice and theory of the future provision of social services.
Abstract:
This paper gives an overview of the INEX 2009 Ad Hoc Track. The main goals of the Ad Hoc Track were threefold. The first goal was to investigate the impact of collection scale and markup, by using a new collection that is again based on the Wikipedia but is over four times larger, with longer articles and additional semantic annotations. For this reason the Ad Hoc Track tasks stayed unchanged, and the Thorough Task of INEX 2002–2006 returned. The second goal was to study the impact of more verbose queries on retrieval effectiveness, by using the available markup as structural constraints—now using both the Wikipedia’s layout-based markup and the enriched semantic markup—and by the use of phrases. The third goal was to compare different result granularities by allowing systems to retrieve XML elements, ranges of XML elements, or arbitrary passages of text. This investigates the value of the internal document structure (as provided by the XML markup) for retrieving relevant information. The INEX 2009 Ad Hoc Track featured four tasks. For the Thorough Task, a ranked list of results (elements or passages) by estimated relevance was needed. For the Focused Task, a ranked list of non-overlapping results (elements or passages) was needed. For the Relevant in Context Task, non-overlapping results (elements or passages) were returned grouped by the article from which they came. For the Best in Context Task, a single starting point (element start tag or passage start) for each article was needed. We discuss the setup of the track and the results for the four tasks.
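One way to picture the Focused Task's non-overlap requirement is a greedy filter over a ranked run: keep each element result only if it is not an ancestor or descendant of a higher-ranked kept result. This is a common post-processing approach, sketched here with made-up element paths, and is not attributed to any particular INEX system:

```python
# Element paths as tuples of steps; one path overlaps another
# exactly when it is a structural prefix (ancestor/descendant).
def is_overlap(p, q):
    shorter, longer = sorted((p, q), key=len)
    return longer[:len(shorter)] == shorter

def remove_overlap(ranked):
    # Greedy filter: walk the ranked list, keeping a result only if it
    # does not overlap anything already kept (higher-ranked).
    kept = []
    for path in ranked:
        if not any(is_overlap(path, k) for k in kept):
            kept.append(path)
    return kept

ranked = [
    ("article", "sec[1]", "p[2]"),        # rank 1: kept
    ("article", "sec[1]"),                # ancestor of rank 1: dropped
    ("article", "sec[2]", "p[1]"),        # disjoint subtree: kept
]
focused = remove_overlap(ranked)
```

The same primitive extends to the Relevant in Context Task by additionally grouping the kept results per article.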
Abstract:
For the analysis of material nonlinearity, an effective shear modulus approach based on the strain control method is proposed in this paper using the point collocation method. Hencky’s total deformation theory is used to evaluate the effective shear modulus, Young’s modulus and Poisson’s ratio, which are treated as spatial field variables. These effective properties are obtained by the strain-controlled projection method in an iterative manner. To evaluate the second-order derivatives of the shape functions at a field point, radial basis functions (RBF) in the local support domain are used. Several numerical examples are presented to demonstrate the efficiency and accuracy of the proposed method, and comparisons have been made with analytical solutions and the finite element method (ABAQUS).
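The strain-controlled effective-modulus idea can be caricatured in one dimension: at a given strain level, find the secant (effective) shear modulus consistent with an assumed nonlinear stress-strain law. The Ramberg-Osgood-style law and every constant below are invented for illustration; the paper's actual Hencky-theory projection over a 2-D field, with RBF collocation, is far richer than this scalar sketch:

```python
# Hypothetical material constants for a Ramberg-Osgood-style shear law:
# gamma(tau) = tau/G0 + alpha * (tau/tau0)**m / G0   (all values made up)
G0, tau0, alpha, m = 80.0, 0.2, 0.5, 3

def gamma_of(tau):
    # Total (elastic + nonlinear) shear strain predicted at stress tau.
    return tau / G0 + alpha * (tau / tau0) ** m / G0

# Strain control: for a prescribed strain gamma, solve gamma_of(tau) = gamma
# by bisection (gamma_of is increasing, and tau = G0*gamma over-predicts).
gamma = 0.01
lo, hi = 0.0, G0 * gamma
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if gamma_of(mid) < gamma:
        lo = mid
    else:
        hi = mid
tau = 0.5 * (lo + hi)
G_eff = tau / gamma                       # secant (effective) shear modulus
```

In the paper's setting this scalar secant update would be performed at every field point, producing the spatial field of effective moduli that the collocation solver then iterates to convergence.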