977 results for Efficient Solutions


Relevance:

20.00%

Publisher:

Abstract:

IT-supported field data management benefits on-site construction management by improving access to information and promoting efficient communication between project team members. However, most on-site safety inspections still rely heavily on subjective judgment and manual reporting processes, so the quality of risk identification and control often depends on the observer's experience. This study aims to develop a methodology for efficiently retrieving safety-related information, so that safety inspectors can easily access relevant site safety information for safer decision making. The proposed methodology consists of three stages: (1) development of a comprehensive safety database containing information on risk factors, accident types, accident impacts and safety regulations; (2) identification of relationships among different risk factors using statistical analysis methods; and (3) user-specified information retrieval using data mining techniques for safety management. This paper presents the overall methodology and preliminary results of the first-stage research, conducted with 101 accident investigation reports.
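The second and third stages can be sketched with a toy example: counting co-occurrences of risk factors and retrieving accident types for a queried factor. The records, field names and query function below are illustrative, not the paper's actual database schema.

```python
from collections import Counter
from itertools import combinations

# Hypothetical accident records; each lists observed risk factors and an
# accident type. The paper's database also holds impacts and regulations.
records = [
    {"factors": {"wet surface", "no guardrail"}, "type": "fall"},
    {"factors": {"wet surface", "poor lighting"}, "type": "slip"},
    {"factors": {"no guardrail", "working at height"}, "type": "fall"},
]

# Stage 2 sketch: count pairwise co-occurrence of risk factors.
pair_counts = Counter()
for rec in records:
    for a, b in combinations(sorted(rec["factors"]), 2):
        pair_counts[(a, b)] += 1

# Stage 3 sketch: retrieve accident types associated with a queried factor.
def retrieve(factor):
    return sorted({r["type"] for r in records if factor in r["factors"]})

print(pair_counts[("no guardrail", "wet surface")])  # co-occurrence count
print(retrieve("no guardrail"))
```

A real deployment would replace the in-memory list with the comprehensive safety database and a statistically grounded association measure.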

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. To this end, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods to formulate web search queries capable of retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is also proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, to improve the accuracy and efficiency of WebPut. Experiments based on several real-world data collections demonstrate that WebPut outperforms existing approaches.
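The confidence-based selection and greedy ordering ideas can be sketched as follows; the cell keys, query names and confidence scores are hypothetical, not WebPut's actual interface.

```python
# Hypothetical missing cells, each with candidate imputation queries
# and confidence scores (query name, confidence).
missing = {
    ("row1", "city"):    [("q_direct", 0.9), ("q_indirect", 0.6)],
    ("row2", "country"): [("q_direct", 0.4), ("q_indirect", 0.7)],
}

def best_query(candidates):
    # Confidence-based scheme: pick the query with highest confidence.
    return max(candidates, key=lambda qc: qc[1])

# Greedy schedule: impute the most confidently resolvable value first,
# mirroring the iterative ordering idea in the paper.
schedule = sorted(missing, key=lambda cell: best_query(missing[cell])[1],
                  reverse=True)
print(schedule)  # row1.city (0.9) is imputed before row2.country (0.7)
```

In the full system, each imputed value would feed back into later queries, which is what makes the ordering matter for accuracy.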

Relevance:

20.00%

Publisher:

Abstract:

This paper introduces PartSS, a new partition-based filtering scheme for tasks performing string comparisons under edit distance constraints. PartSS improves on the state-of-the-art method NGPP with a new partitioning scheme, and strengthens filtering ability by exploiting theoretical results on shifting and scaling ranges, thus accelerating the calculation of edit distance between strings. PartSS filtering has been implemented within two major data integration tasks: similarity join and approximate membership extraction under edit distance constraints. Evaluation on an extensive range of real-world datasets demonstrates major efficiency gains over the NGPP and QGrams approaches.
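PartSS itself uses a more refined partitioning scheme than shown here, but the general pigeonhole idea behind partition-based edit-distance filtering can be sketched as follows: if two strings are within edit distance k, at least one of the k+1 partitions of one string must appear verbatim in the other, so pairs failing that test can be pruned without computing the edit distance.

```python
def partitions(s, k):
    # Split s into k+1 near-even contiguous parts (pigeonhole principle).
    n = len(s)
    q, r = divmod(n, k + 1)
    parts, i = [], 0
    for j in range(k + 1):
        size = q + (1 if j < r else 0)
        parts.append(s[i:i + size])
        i += size
    return parts

def may_match(s, t, k):
    # Filter: if edit_distance(s, t) <= k, at least one of the k+1
    # partitions of s must occur as a substring of t.  The filter can
    # admit false positives, which a verification step would remove.
    return any(p and p in t for p in partitions(s, k))

print(may_match("kitten", "sitting", 2))  # True: pair survives the filter
print(may_match("abcdef", "uvwxyz", 1))   # False: pair safely pruned
```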

Relevance:

20.00%

Publisher:

Abstract:

‘Social innovation’ is a construct increasingly used to explain the practices, processes and actors through which sustained positive transformation occurs in the network society (Mulgan, G., Tucker, S., Ali, R., & Sander, B. (2007). Social innovation: What it is, why it matters and how it can be accelerated. Oxford: Skoll Centre for Social Entrepreneurship; Phills, J. A., Deiglmeier, K., & Miller, D. T. (2008). Stanford Social Innovation Review, 6(4): 34–43). Social innovation has been defined as a “novel solution to a social problem that is more effective, efficient, sustainable, or just than existing solutions, and for which the value created accrues primarily to society as a whole rather than private individuals” (Phills et al., 2008: 34). Emergent ideas of social innovation challenge some traditional understandings of the nature and role of the Third Sector, as well as shining a light on those enterprises within the social economy that configure resources in novel ways. In this context, social enterprises – which provide a social or community benefit and trade to fulfil their mission – have attracted considerable policy attention as one source of social innovation within a wider field of action (see Leadbeater, C. (2007). Social enterprise and social innovation: Strategies for the next 10 years. Cabinet Office, Office of the Third Sector. http://www.charlesleadbeater.net/cms xstandard/social_enterprise_innovation.pdf. Last accessed 19/5/2011). And yet, while social enterprise seems to have gained some symbolic traction in society, there is to date relatively limited evidence of its real-world impacts (Dart, R. (2004). Not for Profit Management and Leadership, 14(4): 411–424). In other words, we do not know much about the social innovation capabilities and effects of social enterprise.
In this chapter, we consider the social innovation practices of social enterprise, drawing on Mulgan et al.'s (2007: 5) three dimensions of social innovation: new combinations or hybrids of existing elements; cutting across organisational, sectoral and disciplinary boundaries; and leaving behind compelling new relationships. Based on a detailed survey of 365 Australian social enterprises, we examine their self-reported business and mission-related innovations, the ways in which they configure and access resources, and the practices through which they diffuse innovation in support of their mission. We then consider how these findings inform our understanding of the social innovation capabilities and effects of social enterprise, and their implications for public policy development.

Relevance:

20.00%

Publisher:

Abstract:

The feasibility of using an in-hardware implementation of a genetic algorithm (GA) to solve the computationally expensive travelling salesman problem (TSP) is explored, particularly with regard to hardware resource requirements for different problem and population sizes. We investigate via numerical experiments whether a small population size might prove sufficient to obtain reasonable-quality solutions for the TSP, thereby permitting a relatively resource-efficient hardware implementation on field programmable gate arrays (FPGAs). Software experiments on two TSP benchmarks involving 48 and 532 cities were used to explore the extent to which population size can be reduced without compromising solution quality, and the results show that a GA allowed to run for a large number of generations with a smaller population can yield solutions of comparable quality to those obtained using a larger population. This finding is then used to investigate feasible problem sizes on a targeted Virtex-7 vx485T-2 FPGA platform via exploration of hardware resource requirements for memory and data flow operations.
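A minimal software sketch of the small-population idea: a deliberately tiny, elitist, mutation-only GA run for many generations on toy city coordinates. The operators, population size and instance here are illustrative, not the paper's actual configuration or benchmarks.

```python
import random

random.seed(0)

# Toy city coordinates (hypothetical); the paper's benchmarks use
# 48- and 532-city instances.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 0), (3, 3)]

def length(tour):
    # Closed-tour Euclidean length (tour[i - 1] wraps to the last city).
    return sum(((cities[tour[i]][0] - cities[tour[i - 1]][0]) ** 2 +
                (cities[tour[i]][1] - cities[tour[i - 1]][1]) ** 2) ** 0.5
               for i in range(len(tour)))

def mutate(tour):
    # Swap mutation: exchange two randomly chosen cities.
    a, b = random.sample(range(len(tour)), 2)
    child = tour[:]
    child[a], child[b] = child[b], child[a]
    return child

# Tiny population run for many generations, echoing the paper's finding
# that small populations can still reach good tours.
pop = [random.sample(range(len(cities)), len(cities)) for _ in range(4)]
for _ in range(500):
    pop.sort(key=length)
    pop = pop[:2] + [mutate(random.choice(pop[:2])) for _ in range(2)]

best = min(pop, key=length)
print(sorted(best), round(length(best), 2))  # valid tour and its length
```

In hardware, a small population like this directly shrinks the memory blocks and data paths an FPGA implementation must provision.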

Relevance:

20.00%

Publisher:

Abstract:

This paper presents an efficient face detection method suitable for real-time surveillance applications. Improved efficiency is achieved by constraining the search window of an AdaBoost face detector to pre-selected regions. Firstly, the proposed method takes a sparse grid of sample pixels from the image to reduce the whole-image scan time. A fusion of foreground segmentation and skin colour segmentation is then used to select candidate face regions. Finally, a classifier-based face detector (the Viola-Jones detector in this paper) is applied only to the selected regions to verify the presence of a face. The proposed system is evaluated using 640 × 480 pixel test images and compared with other relevant methods. Experimental results show that the proposed method reduces the detection time to 42 ms, where the Viola-Jones detector alone requires 565 ms (on a desktop processor). This improvement makes the face detector suitable for real-time applications. Furthermore, the proposed method requires 50% of the computation time of the best competing method, while reducing the false positive rate by 3.2% and maintaining the same hit rate.
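The region pre-selection logic can be sketched in pure Python; the mask contents, grid stride and stubbed classifier below are illustrative stand-ins for the foreground/skin segmentation and Viola-Jones stages.

```python
# Sketch of region pre-selection: fuse foreground and skin masks sampled
# on a sparse grid, then run a (stubbed) face detector only where both
# cues agree.  Mask values are hypothetical.
W, H, STEP = 12, 8, 2   # image size and sparse-grid stride

foreground = {(x, y) for x in range(4, 9) for y in range(2, 6)}
skin       = {(x, y) for x in range(5, 11) for y in range(3, 7)}

# Sample only every STEP-th pixel (sparse grid reduces scan time).
grid = [(x, y) for x in range(0, W, STEP) for y in range(0, H, STEP)]

# Fusion: a grid point is a candidate only if both segmentations fire.
candidates = [p for p in grid if p in foreground and p in skin]

def detect_face(region):
    # Stand-in for the classifier stage (Viola-Jones in the paper).
    return True  # assume verification succeeds for this sketch

verified = [p for p in candidates if detect_face(p)]
print(len(grid), len(candidates))  # far fewer classifier invocations
```

The speed-up in the paper comes from exactly this reduction: the expensive cascade runs on a handful of candidate regions instead of the whole frame.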

Relevance:

20.00%

Publisher:

Abstract:

Reliable communication is one of the major concerns in wireless sensor networks (WSNs). Multipath routing is an effective way to improve communication reliability in WSNs. However, most existing multipath routing protocols for sensor networks are reactive and require dynamic route discovery. If there are many sensor nodes between a source and a destination, the route discovery process creates a long end-to-end transmission delay, which causes difficulties in time-critical applications. To overcome this difficulty, efficient route update and maintenance processes are proposed in this paper. They aim to limit the amount of routing overhead with a two-tier routing architecture and introduce a combination of piggyback and trigger updates to replace the periodic update process, which is the main source of unnecessary routing overhead. Simulations are carried out to demonstrate the effectiveness of the proposed processes in reducing the total routing overhead relative to existing popular routing protocols.
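The piggyback-plus-trigger idea can be sketched as follows; the threshold, cost metric and packet format are hypothetical, not the proposed protocol's actual design.

```python
# Sketch: replace periodic route updates with (a) routing info piggybacked
# on outgoing data packets and (b) control packets sent only when a link
# metric changes significantly.  All values are illustrative.
TRIGGER_THRESHOLD = 2  # cost change that forces a triggered update

class Node:
    def __init__(self):
        self.advertised_cost = 5   # last cost neighbours know about
        self.current_cost = 5
        self.updates_sent = 0      # dedicated control packets only

    def send_data(self, payload):
        # Piggyback: attach routing info to outgoing data at no extra cost.
        packet = {"data": payload, "route_cost": self.current_cost}
        self.advertised_cost = self.current_cost
        return packet

    def link_change(self, new_cost):
        self.current_cost = new_cost
        # Trigger: only a significant change generates a control packet.
        if abs(new_cost - self.advertised_cost) >= TRIGGER_THRESHOLD:
            self.advertised_cost = new_cost
            self.updates_sent += 1

n = Node()
n.link_change(6)        # small change: suppressed, no overhead
n.send_data("reading")  # piggybacked update, still no control packet
n.link_change(9)        # large change: triggered control packet
print(n.updates_sent)   # one control packet instead of periodic floods
```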

Relevance:

20.00%

Publisher:

Abstract:

Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation by repeatedly sampling data from the model and comparing observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics to use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments: that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments, and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way.
If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design that accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to lose the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs the motor unit effectively ‘dies’. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is especially poor, with patients usually surviving only a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists. Motor unit number estimation (MUNE) attempts to directly assess underlying motor unit loss, rather than relying on indirect techniques such as muscle strength assessment, which is generally unable to detect progression owing to the body's natural attempts at compensation.
Part III of this thesis builds upon a previous Bayesian technique, whose sophisticated statistical model takes into account physiological information about motor unit activation and various sources of uncertainty. More specifically, we develop a more reliable MUNE method by marginalising over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We also make other subtle changes to the model and algorithm to improve the robustness of the approach.
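The likelihood-free (ABC) idea underlying Part I can be illustrated with a minimal rejection sampler; the thesis develops far more simulation-efficient SMC versions, and the model, prior and tolerance here are purely illustrative.

```python
import random

random.seed(1)

# Toy model: data ~ Normal(theta, 1); summary statistic: sample mean.
observed = [random.gauss(3.0, 1.0) for _ in range(50)]
obs_summary = sum(observed) / len(observed)

def simulate(theta, n=50):
    # Draw a synthetic dataset from the model and return its summary.
    return sum(random.gauss(theta, 1.0) for _ in range(n)) / n

accepted = []
for _ in range(2000):
    theta = random.uniform(0, 6)                   # draw from the prior
    if abs(simulate(theta) - obs_summary) < 0.2:   # compare summaries
        accepted.append(theta)   # note: likelihood never evaluated

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))  # should land near the true value 3.0
```

The cost of this naive version is exactly what the thesis targets: most prior draws are wasted, which SMC-based ABC mitigates by reusing promising particles.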

Relevance:

20.00%

Publisher:

Abstract:

Waste management and minimisation is considered an important issue for achieving sustainability in the construction industry. Retrofit projects generate less waste than demolitions and new builds, but they possess unique features and require waste management approaches different from those of traditional new builds. With the increasing demand for more energy-efficient and environmentally sustainable office spaces, the office building retrofit market is growing in capital cities around Australia, with a high level of refurbishment needed for existing ageing properties. Restricted site space and an uncertain delivery process make it a major challenge to manage waste effectively in these projects. The labour-intensive nature of retrofit projects creates the need for the involvement of small and medium enterprises (SMEs) as subcontractors in on-site works. SMEs are familiar with on-site waste generation but are not as actively motivated and engaged in waste management activities as stakeholders in other construction projects in the industry. SMEs' responsibilities for waste management in office building retrofit projects need to be identified and adapted to the work delivery processes and the waste management system supported by project stakeholders. The existing literature provides an understanding of how to manage construction waste that has already been generated and how to increase the waste recovery rate for office building retrofit projects. However, previous research has not developed theories or practical solutions that can guide project stakeholders in understanding the specific waste generation process and effectively planning for and managing waste in ongoing project works. Nor has an appropriate method been established for the potential role and capability of SMEs in managing and minimising waste from their subcontracting works.
This research probes the characteristics of office building retrofit project delivery with the aim of developing specific tools to manage waste and incorporate SMEs in this process in an appropriate and effective way. Based on an extensive literature review, the research first developed a questionnaire survey to identify the critical factors of on-site waste generation in office building retrofit projects. Semi-structured interviews were then used to validate the critical waste factors and establish the interrelationships between them. The interviews served another important function of identifying the current problems of waste management in the industry and the performance of SMEs in this area. Interviewees' opinions on remedies to the problems were also collected. Building on the findings from the questionnaire survey and semi-structured interviews, two waste planning and management strategies were identified for the dismantling phase and the fit-out phase of office building retrofit projects, respectively. Two models were then established to organise SMEs' waste management activities: a work process-based integrated waste planning model for the dismantling phase and a system dynamics model for the fit-out phase. In order to apply the models in practice, procedures were developed to guide SMEs' work flow in on-site waste planning and management. In addition, a collaboration framework was established for SMEs and other project stakeholders for effective waste planning and management. Furthermore, an organisational engagement strategy was developed to improve SME waste management practices. Three case studies were conducted to validate and finalise the research deliverables. This research extends the current literature, which mostly covers waste management plans in new-build projects, by presenting the knowledge and understanding needed to address waste problems in retrofit projects.
It provides practical tools and guidance for industry practitioners to effectively manage the waste generation processes in office building retrofit projects. It can also promote industry-level recognition of the role of SMEs and their performance in on-site waste management.

Relevance:

20.00%

Publisher:

Abstract:

To protect health information, cryptography plays an important role in establishing confidentiality, authentication, integrity and non-repudiation. Keys used for encryption/decryption and digital signing must be managed in a safe, secure, effective and efficient fashion. The certificate-based Public Key Infrastructure (PKI) scheme may seem a natural way to support information security; however, there is still a lack of successful large-scale certificate-based PKI deployments in the world. In addressing the limitations of the certificate-based PKI scheme, this paper proposes a non-certificate-based key management scheme for a national e-health implementation. The proposed scheme eliminates certificate management and complex certificate validation procedures while still maintaining security. It is also believed that this study will add a new dimension to the provision of security for the protection of health information in a national e-health environment.

Relevance:

20.00%

Publisher:

Abstract:

This chapter considers to what degree the careers of women with young families, both in and out of paid employment, are lived as contingent, intersubjective projects pursued across time and space, in the social condition of growing biographical possibilities and uneven social/ideological change. Their resolutions of competing priorities by engaging in various permutations of home-work and paid work are termed ‘workable solutions’, with an intentional play on the double sense of ‘work’ – firstly as labour, thus being able to perform work, whether paid or not; secondly as in being able to make things work or function in the family unit’s best interests, however defined.

Relevance:

20.00%

Publisher:

Abstract:

Stochastic differential equations (SDEs) arise from physical systems where the parameters describing the system can only be estimated or are subject to noise. There has been much recent work on developing numerical methods for solving SDEs. This paper focuses on stability issues and variable stepsize implementation techniques for numerically solving SDEs effectively.
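As a concrete example of a basic numerical scheme for SDEs (though not the stability or variable-stepsize techniques the paper focuses on), a fixed-step Euler-Maruyama iteration for geometric Brownian motion might look like the following; the coefficients and step size are illustrative.

```python
import random

random.seed(2)

# Euler-Maruyama for dX = a*X dt + b*X dW (geometric Brownian motion):
# advance X with the drift term plus a Brownian increment ~ N(0, dt).
a, b = 0.05, 0.2            # drift and diffusion coefficients
x, t, dt, T = 1.0, 0.0, 0.01, 1.0

while t < T - 1e-12:
    dW = random.gauss(0.0, dt ** 0.5)   # Brownian increment
    x += a * x * dt + b * x * dW
    t += dt

print(round(x, 3))  # endpoint of one simulated sample path
```

A variable-stepsize scheme of the kind the paper discusses would adapt `dt` along the path, which requires care because the Brownian increments must remain consistent when a step is rejected and retried.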

Relevance:

20.00%

Publisher:

Abstract:

The increasing use of computerized systems in our daily lives creates new adversarial opportunities, which are exploited through complex mechanisms to drive the rapid development of new attacks. Behavioral biometrics appears to be one of the promising responses to these attacks. However, as it is a relatively new research area, specific frameworks for the evaluation and development of behavioral biometrics solutions do not yet exist. In this paper we present the conception of a generic framework and runtime environment that will enable researchers to develop, evaluate and compare their behavioral biometrics solutions in repeatable experiments under the same conditions and with the same data.
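One way such a framework might ensure repeatable, like-for-like comparison is to run every registered solution through the same evaluation function on the same data; the classifier, sample data and registry below are hypothetical, not the proposed framework's actual API.

```python
# Sketch: evaluate competing behavioral-biometrics classifiers on
# identical data under an identical protocol, so results are comparable.
def accuracy(classifier, samples):
    hits = sum(1 for features, label in samples
               if classifier(features) == label)
    return hits / len(samples)

# Toy keystroke-timing samples: (mean inter-key delay in s, user label).
samples = [(0.12, "alice"), (0.14, "alice"), (0.31, "bob"), (0.29, "bob")]

def threshold_classifier(delay):
    # Trivial illustrative solution: classify by a fixed delay threshold.
    return "alice" if delay < 0.2 else "bob"

registry = {"threshold": threshold_classifier}
results = {name: accuracy(clf, samples) for name, clf in registry.items()}
print(results)  # same data and protocol for every registered solution
```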