973 results for software creation infrastructure


Relevance:

20.00%

Publisher:

Abstract:

Earthwork planning is considered in this article, and a generic block partitioning and modelling approach is devised to provide strategic plans at various levels of detail. Conceptually this approach is more accurate and comprehensive than others, for instance those that are section based. In response to environmental concerns, fuel consumption and emissions were adopted as the metric for decision making; haulage distance and gradient are also included, as they are important components of this metric. Advantageously, the fuel consumption metric is generic and captures the physical difficulty of travelling over inclines of different gradients in a way that is consistent across all hauling vehicles. For validation, the proposed models and techniques have been applied to a real-world road project. The numerical investigations demonstrate that the models can be solved with relatively little CPU time, and that the block models produce solutions of superior quality, i.e. with reduced fuel consumption and cost. Furthermore, the resulting plans differ considerably from those based solely upon a distance-based metric, demonstrating a need for industry to reflect upon current practice.
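
The abstract does not give the fuel model itself, but a minimal sketch of a gradient-aware haul cost, of the kind such block models could use as an arc cost between cut and fill blocks, might look as follows. The coefficients, function name, and the asymmetric uphill/downhill treatment are illustrative assumptions, not the paper's formulation.

```python
# Illustrative constants (assumptions, not values from the paper):
# rolling and grade resistance expressed as fuel burn per tonne-metre.
ROLLING_FUEL = 0.00035   # litres per tonne-metre on level ground
GRADE_FUEL = 0.0025      # extra litres per tonne-metre per unit of grade

def haul_fuel(mass_t: float, distance_m: float, rise_m: float) -> float:
    """Approximate fuel (litres) to haul `mass_t` tonnes over one segment.

    Uphill travel adds a grade term; downhill hauls get no fuel credit,
    which keeps the metric asymmetric, as gradient-aware earthwork
    metrics generally are.
    """
    grade = rise_m / distance_m if distance_m else 0.0
    fuel = ROLLING_FUEL * mass_t * distance_m
    if grade > 0:
        fuel += GRADE_FUEL * mass_t * distance_m * grade
    return fuel

# The fuel figure (times a unit price) can then serve as the arc cost
# of moving material from cut block i to fill block j.
print(f"{haul_fuel(mass_t=40.0, distance_m=850.0, rise_m=12.0):.2f} litres")
```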

Relevance:

20.00%

Publisher:

Abstract:

The use of hedonic models to estimate the effects of various factors on house prices is well established. This paper examines a number of international hedonic house price models that seek to quantify the effect of infrastructure charges on new house prices. This work is an important factor in the housing affordability debate: many governments in high-growth areas operate user-pays infrastructure charging policies in tandem with housing affordability objectives, with no empirical evidence on the impact of one on the other. This research finds little consistency between existing models and the data sets utilised. Specification appears dependent upon data availability rather than sound theoretical grounding, which may lead to a lack of external validity.
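
As a concrete illustration of the general shape these hedonic models share, the sketch below fits a log-linear price equation with an infrastructure-charge term to synthetic data. All variable names, coefficients, and data are invented for illustration; no specific model from the paper is reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic dwelling attributes (illustrative only).
floor_area = rng.uniform(80, 300, n)          # square metres
bedrooms = rng.integers(1, 6, n)
infra_charge = rng.uniform(5_000, 40_000, n)  # developer charge, $

# For the sketch, assume charges feed into prices with some pass-on rate.
log_price = (11.0 + 0.004 * floor_area + 0.05 * bedrooms
             + 1.5e-5 * infra_charge + rng.normal(0, 0.1, n))

# Hedonic specification: ln(P) = b0 + b1*area + b2*beds + b3*charge + e
X = np.column_stack([np.ones(n), floor_area, bedrooms, infra_charge])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# b3 is the semi-elasticity of price with respect to the charge.
print(f"estimated charge coefficient: {beta[3]:.2e}")
```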

Relevance:

20.00%

Publisher:

Abstract:

The fastest-growing segment of jobs in the creative sector is in those firms that provide creative services to other sectors (Hearn, Goldsmith, Bridgstock, Rodgers 2014, this volume; Cunningham 2014, this volume). There are also a large number of Creative Services workers (Architecture and Design, Advertising and Marketing, Software and Digital Content occupations) embedded in organizations in other industry sectors (Cunningham and Higgs 2009). Ben Goldsmith (2014, this volume) shows, for example, that the Financial Services sector is the largest employer of digital creative talent in Australia. But why should this be? We argue it is because ‘knowledge-based intangibles are increasingly the source of value creation and hence of sustainable competitive advantage’ (Mudambi 2008, 186). This value creation occurs primarily at the research and development (R&D) and marketing ends of the supply chain. Both of these areas require strong creative capabilities in order to design for, and to persuade, consumers. It is no surprise that Jess Rodgers (2014, this volume), in a study of Australia’s Manufacturing sector, found designers and advertising and marketing occupations to be the most numerous creative occupations. Greg Hearn and Ruth Bridgstock (2013, forthcoming) suggest ‘the creative heart of the creative economy […] is the social and organisational routines that manage the generation of cultural novelty, both tacit and codified, internal and external, and [cultural novelty’s] combination with other knowledges […] produce and capture value’. Moreover, the main ‘social and organisational routine’ is usually a team (for example, Grabher 2002; 2004).

Relevance:

20.00%

Publisher:

Abstract:

Collaborative infrastructure projects use hybrid formal and informal governance structures to manage transactions. Based on previous desktop research, the authors identified the key mechanisms underlying project governance and posited the performance implications of that governance (Chen et al. 2012). The current paper extends that qualitative research by testing the veracity of those findings against data from 320 Australian construction organisations. The results provide, for the first time, reliable and valid scales for measuring the governance and performance of collaborative projects, and the relationship between them. The results confirm seven of seven hypothesised governance mechanisms, 30 of 43 hypothesised underlying actions, eight of eight hypothesised key performance indicators, and the dual importance of formal and informal governance. A startling finding of the study is that the implementation intensity of informal mechanisms (non-contractual conditions) is a greater predictor of project performance variance than that of formal mechanisms (contractual conditions). Further, contractual conditions do not directly impact project performance; their impact is instead mediated by the non-contractual features of a project. Obligations established under the contract are therefore not sufficient to optimise project performance.
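
The mediation finding can be illustrated with a simple regression-based check in the style of Baron and Kenny; the sketch below runs it on synthetic data. The variable names are placeholders, and the study's actual analysis of the 320-organisation sample is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 320  # same size as the study's sample; the data here is synthetic

formal = rng.normal(size=n)                      # contractual conditions
informal = 0.7 * formal + rng.normal(size=n)     # non-contractual conditions
perform = 0.6 * informal + rng.normal(size=n)    # project performance

def ols_slopes(y, *xs):
    """Least-squares slopes of y on the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

total = ols_slopes(perform, formal)[0]               # formal -> performance
direct, via = ols_slopes(perform, formal, informal)  # controlling the mediator

# Full-mediation pattern: the direct path shrinks towards zero once the
# informal mechanisms are controlled for.
print(f"total {total:.2f}, direct {direct:.2f}, mediator path {via:.2f}")
```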

Relevance:

20.00%

Publisher:

Abstract:

Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information, whether that is information for a specific study, tweets that can inform emergency services or other responders to an ongoing crisis, or signals that give an advantage to those involved in prediction markets. Such a process is often iterative: keywords and hashtags change with the passage of time, and both collection and analytic methodologies need to be continually adapted in response. While many of the data sets collected and analyzed are pre-formed, that is, built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders.

The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions: content analysis and user profiling. In the former, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information; the latter attempts to identify users who are either serving as amplifiers of information or are known as an authoritative source. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, while knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection.

The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions that has the potential to affect future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services remain niche operations, much of the value of information is lost by the time it reaches one of them. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate.

The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in this sense, keyword creation is part strategy and part art. The paper describes strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement’s presence on Twitter changes over time. It also discusses opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study.

The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
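
As a rough illustration of the triage described in the first paper, the sketch below combines a keyword-based content score with a simple authority boost per author and keeps only tweets above a threshold. The keywords, weights, handles, and threshold are all invented for the example.

```python
# Minimal tweet-triage sketch: content score plus author authority.
# Keywords, weights, and the authority list are illustrative only.

CRISIS_KEYWORDS = {"flood": 3, "evacuate": 5, "trapped": 5, "road closed": 2}
AUTHORITATIVE = {"qldpolice", "bomqld"}  # hypothetical authority handles

def score_tweet(text: str, author: str) -> int:
    text = text.lower()
    score = sum(w for kw, w in CRISIS_KEYWORDS.items() if kw in text)
    if author.lower() in AUTHORITATIVE:
        score += 5  # boost known-authoritative sources
    return score

def triage(tweets, threshold=5):
    """Keep only tweets urgent enough to put in front of responders."""
    return [t for t in tweets
            if score_tweet(t["text"], t["author"]) >= threshold]

stream = [
    {"text": "Trapped on roof, water rising, evacuate us", "author": "resident1"},
    {"text": "nice weather today", "author": "resident2"},
]
print(triage(stream))
```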

Relevance:

20.00%

Publisher:

Abstract:

In Australia, collaborative contracts, and in particular project alliances, have been increasingly used to govern infrastructure projects. These contracts use formal and informal governance mechanisms to manage the delivery of infrastructure projects. Formal mechanisms such as financial risk sharing are specified in the contract, while informal mechanisms such as integrated teams are not. Given that the literature contains a multiplicity of often untestable definitions, this paper reports on a review of the literature to operationalise the concepts of formal and informal governance. This work is the first phase of a study that will examine the optimal balance of formal and informal governance structures. A desktop review of leading journals in construction management and business management, as well as recent government documents and industry guidelines, was undertaken to conceptualise and operationalise formal and informal governance mechanisms. The study primarily draws on transaction-cost economics (e.g. Williamson 1979; Williamson 1991), relational contract theory (Feinman 2000; Macneil 2000) and social psychology theory (e.g. Gulati 1995). Content analysis of the literature was undertaken to identify key governance mechanisms; content analysis is a commonly used methodology in the social sciences, providing rich data through the systematic and objective review of literature (Krippendorff 2004). NVivo 9, a qualitative data analysis software package, was used to assist in this process. A previous study by the authors identified that formal governance mechanisms can be classified into seven measurable categories: (1) negotiated cost, (2) competitive cost, (3) commercial framework, (4) risk and reward sharing, (5) qualitative performance, (6) collaborative multi-party agreement, and (7) early contractor involvement. Similarly, informal governance mechanisms can be classified into four measurable categories: (1) leadership structure, (2) integrated team, (3) team workshops, and (4) joint management system. This paper explores and further defines the key operational characteristics of each mechanism category, highlighting its impact on value for money in alliance project delivery. The paper’s contribution is that it provides the basis for future research to compare the impact of a range of individual mechanisms within each category, as a means of improving the performance of construction projects.
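
Purely as a toy illustration of the content-analysis step, the sketch below codes a passage of text against a dictionary of terms for a few of the categories above. The term lists are invented, and real NVivo-assisted coding is of course far richer than string matching.

```python
# Toy dictionary-based coding pass over a literature excerpt.
# Category keywords are invented for illustration.
from collections import Counter

CATEGORY_TERMS = {
    "risk and reward sharing": ["risk sharing", "gainshare", "painshare"],
    "early contractor involvement": ["early contractor", "eci"],
    "integrated team": ["integrated team", "co-location"],
    "team workshops": ["workshop", "team building"],
}

def code_passage(passage: str) -> Counter:
    passage = passage.lower()
    return Counter({cat: sum(passage.count(t) for t in terms)
                    for cat, terms in CATEGORY_TERMS.items()})

excerpt = ("The alliance used gainshare/painshare arrangements and "
           "co-location of an integrated team.")
print(code_passage(excerpt))
```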

Relevance:

20.00%

Publisher:

Abstract:

Collaborative contracting has emerged over the past 15 years as an innovative project delivery framework that is particularly suited to infrastructure projects. Australia leads the world in the development of project and program alliance approaches to collaborative delivery, which are considered to promise superior project results. However, very little is known about the learning routines that are most widely used in support of collaborative projects in general and alliance projects in particular; the literature on absorptive capacity and dynamic capabilities indicates that such learning enhances project performance. The learning routines employed at corporate level during the operation of collaborative infrastructure projects in Australia were examined through a large survey conducted in 2013, and this paper presents a descriptive summary of the preliminary findings. The survey captured the experiences of 320 practitioners of collaborative construction projects, including public and private sector clients, contractors, consultants and suppliers (three per cent of projects were located in New Zealand, but for brevity’s sake the sample is referred to as Australian). The majority of projects identified used alliances (78.6%), whilst 9% used Early Contractor Involvement (ECI) contracts and 2.7% used Early Tender Involvement contracts, which are ‘slimmer’ types of collaborative contract. The remaining 9.7% of respondents used traditional contracts that employed some collaborative elements. The majority of projects were delivered for public sector clients (86.3%) and/or clients experienced in asset procurement (89.6%). All of the projects delivered infrastructure assets: one third in the road sector, one third in the water sector, one fifth in the rail sector, and the rest spread across energy, building and mining. Learning routines were explored within three interconnected phases: knowledge exploration, transformation and exploitation. The results show that explorative and exploitative learning routines were applied to a similar extent, while transformative routines were applied to a relatively low extent. The most highly applied routine was ‘regularly applying new knowledge to collaborative projects’; the least applied was ‘staff incentives to encourage information sharing about collaborative projects’. Future research planned by the authors will examine the impact of these routines on project performance.

Relevance:

20.00%

Publisher:

Abstract:

Fire incidents in buildings are common, so the fire safety design of framed structures is imperative, especially for unprotected or partly protected bare steel frames. However, software for structural fire analysis is not widely available, and performance-based structural fire design is therefore best pursued with user-friendly, conventional nonlinear analysis programs, so that engineers do not need to acquire new structural analysis software for structural fire analysis and design. Such a tool should be able to simulate different fire scenarios and their associated detrimental effects efficiently, including second-order P-Δ and P-δ effects and material yielding. The nonlinear behaviour of large-scale structures becomes complicated under fire, so simulation relies on efficient and effective numerical analysis to cope with the intricate nonlinear effects that fire induces. To this end, the present study uses the second-order elastic/plastic analysis software NIDA to predict the structural behaviour of bare steel framed structures at elevated temperatures. The study accounts for thermal expansion and material degradation due to heating: the degradation of material strength with increasing temperature is included through a set of temperature-stress-strain curves, mainly according to BS5950 Part 8, which implicitly allows for creep deformation. The finite element stiffness formulation of the beam-column elements is derived from the fifth-order PEP element, which facilitates computer modelling with one element per member. The Newton-Raphson method is used in the nonlinear solution procedure to trace the nonlinear equilibrium path at specified elevated temperatures. Several numerical and experimental verifications of framed structures are presented and compared against solutions in the literature. The proposed method permits engineers to adopt performance-based structural fire analysis and design using typical second-order nonlinear structural analysis software.
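
The solution procedure described can be sketched in miniature: at a fixed elevated temperature, Newton-Raphson iterations adjust displacements until temperature-degraded internal forces balance the applied load. The single-degree-of-freedom spring and the linear degradation law below are toy assumptions standing in for the PEP element formulation and the BS5950 curves.

```python
# Toy 1-DOF Newton-Raphson equilibrium trace at a fixed temperature.
# The softening spring and the linear strength-degradation law are
# placeholders, not the real temperature-dependent PEP formulation.

def degraded_k(k0: float, temp_c: float) -> float:
    """Linearly degrade stiffness above 400 C (illustrative, not BS5950)."""
    if temp_c <= 400:
        return k0
    return k0 * max(0.05, 1.0 - (temp_c - 400) / 800)

def internal_force(u: float, temp_c: float, k0=1e4, gamma=50.0) -> float:
    # Softening spring: stiffness reduces with displacement and temperature.
    return degraded_k(k0, temp_c) * u / (1.0 + gamma * abs(u))

def solve_displacement(load: float, temp_c: float, tol=1e-8, max_iter=50):
    u = 0.0
    for _ in range(max_iter):
        residual = internal_force(u, temp_c) - load
        if abs(residual) < tol:
            return u
        # Tangent stiffness by finite difference; take a Newton step.
        h = 1e-7
        kt = (internal_force(u + h, temp_c) - internal_force(u, temp_c)) / h
        u -= residual / kt
    raise RuntimeError("Newton-Raphson failed to converge")

for t in (20, 400, 600, 800):
    print(t, solve_displacement(load=50.0, temp_c=t))
```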

Relevance:

20.00%

Publisher:

Abstract:

The detection and correction of defects remain among the most time-consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault-prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data is often extremely noisy, with enormous imbalances in the size of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved, or at worst comparable, performance relative to earlier approaches on standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort for different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers, and with our own comprehensive evaluation of these methods.
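
The abstract does not define the rank sum representation, so the sketch below is only a plausible reading of the flavour of the idea: rank each static-code metric across modules, sum the ranks per module, and flag the top fraction for inspection, with that fraction acting as the precision/recall dial. It should not be taken as the authors' algorithm.

```python
import numpy as np

# Rows = modules, columns = static code metrics (synthetic values).
rng = np.random.default_rng(42)
metrics = rng.lognormal(mean=1.0, sigma=0.8, size=(200, 6))

# Rank each metric column across modules (1 = smallest), then sum ranks
# per module to get a single ordinal fault-proneness score.
ranks = metrics.argsort(axis=0).argsort(axis=0) + 1
rank_sum = ranks.sum(axis=1)

# Flag the top q fraction of modules for inspection; moving q trades
# precision against recall to suit the testing budget.
q = 0.2
threshold = np.quantile(rank_sum, 1 - q)
flagged = np.flatnonzero(rank_sum >= threshold)
print(f"{len(flagged)} of {len(rank_sum)} modules flagged for inspection")
```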

Relevance:

20.00%

Publisher:

Abstract:

Developer-paid charges or contributions are a commonly used infrastructure funding mechanism for local governments. However, developers claim that these costs are merely passed on to home buyers, with adverse effects on housing affordability. Despite a plethora of government reports and industry advocacy, there remains no empirical evidence in Australia to confirm or quantify this passing-on effect, and hence no data upon which governments can base policy decisions. This paper examines the question of who really pays for urban infrastructure and the impact of infrastructure charges on housing affordability. It presents the findings of a number of international empirical studies that provide evidence that infrastructure charges do increase house prices. In the absence of any Australian research, these international findings, if transferable, constitute empirical evidence for the proposition that developer-paid infrastructure charges are a significant contributor to increasing house prices.

Relevance:

20.00%

Publisher:

Abstract:

This column features a conversation (via email, image sharing, and FaceTime) that took place over several months between two international theorists of digital filmmaking from schools in two countries: Professors Jason Ranker (Portland State University, Oregon, United States) and Kathy Mills (Queensland University of Technology, Australia). The authors discuss emerging ways of thinking about video making, sharing tips and anecdotes from classroom experience to inspire teachers to explore with adolescents the meaning potentials of digital video creation. The authors briefly discuss their previous work in this area, and then move into a discussion of how the material spaces in which students create videos profoundly shape the films' meanings and significance. The article ends with a discussion of how students can take up creative new directions, pushing the boundaries of classroom video making and uncovering profound uses of the medium.

Relevance:

20.00%

Publisher:

Abstract:

Games and the broader interactive entertainment industry constitute the major ‘born global/born digital’ creative industry. The videogame industry (formally referred to as interactive entertainment) is the economic sector that develops, markets and sells videogames to millions of people worldwide, with over 11 countries each generating revenues of more than $1 billion. Global revenue was expected to grow 9.1 per cent annually, to $48.9 billion in 2011 and $68 billion in 2012, making it the fastest-growing component of the international media sector (Scanlon, 2007; Caron, 2008).

Relevance:

20.00%

Publisher:

Abstract:

Searching for health advice on the web is becoming increasingly common. Because of the great importance of this activity for patients and clinicians, and the effect that incorrect information may have on health outcomes, it is critical to present relevant and valuable information to a searcher. Previous evaluation campaigns on health information retrieval (IR) have provided benchmarks that have been widely used to improve health IR and record these improvements. In general, however, these benchmarks have targeted the specialised information needs of physicians and other healthcare workers. In this paper, we describe the development of a new collection for evaluating the effectiveness of IR that seeks to satisfy the health information needs of patients. Our methodology features a novel way to create statements of patients’ information needs, using realistic short queries associated with patient discharge summaries, which provide details of patient disorders. We adopt a scenario in which the patient creates a query to seek information relating to these disorders. Discharge summaries thus provide a means to create contextually driven search statements, since they may include details on the stage of the disease, family history, and so on. The collection will be used for the first time as part of the ShARe/CLEF 2013 eHealth Evaluation Lab, which focuses on natural language processing and IR for clinical care.
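
A toy sketch of the query-generation scenario might look as follows: pull disorder mentions from a discharge summary and phrase one as a short, patient-style query. The disorder lexicon and summary text are invented, and the actual collection pairs queries with annotated disorders rather than relying on simple string matching.

```python
# Toy sketch: derive short patient-style queries from a discharge summary.
# The disorder lexicon and summary text are invented for illustration.

DISORDER_LEXICON = {"atrial fibrillation", "pneumonia", "hypertension"}

summary = ("Patient admitted with community-acquired pneumonia on a "
           "background of hypertension; discharged on oral antibiotics.")

def extract_disorders(text: str):
    text = text.lower()
    return [d for d in DISORDER_LEXICON if d in text]

def to_patient_query(disorder: str) -> str:
    # Patients tend to issue short, plain-language queries.
    return f"what is {disorder} and how is it treated"

for d in extract_disorders(summary):
    print(to_patient_query(d))
```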

Relevance:

20.00%

Publisher:

Abstract:

This article introduces a novel platform for conducting controlled and risk-free driving and traveling behavior studies, called the Cyber-Physical System Simulator (CPSS). The key features of CPSS are: (1) simulation of multiuser immersive driving in a three-dimensional (3D) virtual environment; (2) integration of traffic and communication simulators with human driving, based on dedicated middleware; and (3) accessibility of the multiuser driving simulator on popular software and hardware platforms. This combination of features allows large-scale data to be collected easily on the interaction between multiple user drivers, which is not possible with current single-user driving simulators. The core original contribution of this article is threefold: (1) we introduce a multiuser driving simulator based on DiVE, our original massively multiuser networked 3D virtual environment; (2) we introduce OpenV2X, a middleware for simulating vehicle-to-vehicle and vehicle-to-infrastructure communication; and (3) we present two experiments based on our CPSS platform. The first experiment investigates the “rubbernecking” phenomenon, in which a platoon of four user drivers experiences an accident in the oncoming direction of traffic. The second reports on a pilot study of the effectiveness of a Cooperative Intelligent Transport Systems advisory system.
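
OpenV2X's API is not described in the abstract, so the sketch below shows only the generic publish/subscribe shape that V2V/V2I simulation middleware commonly takes. Every class, topic, and message field here is hypothetical; none of it is OpenV2X's actual interface.

```python
# Generic pub/sub sketch of V2V/V2I message exchange middleware.
# All names are hypothetical; this is not OpenV2X's API.
from collections import defaultdict
from typing import Callable

class MessageBus:
    def __init__(self):
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]):
        self._subs[topic].append(handler)

    def publish(self, topic: str, msg: dict):
        # Deliver the message to every subscriber on this topic.
        for handler in self._subs[topic]:
            handler(msg)

bus = MessageBus()

# A simulated vehicle subscribes to hazard warnings.
bus.subscribe("v2x/hazard", lambda m: print(f"vehicle warned: {m['what']}"))

# A vehicle near the incident broadcasts it, as in the rubbernecking scenario.
bus.publish("v2x/hazard", {"what": "accident ahead, oncoming lanes",
                           "pos": (120.5, 43.2)})
```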

Relevance:

20.00%

Publisher:

Abstract:

Software development settings provide a great opportunity for CSCW researchers to study collaborative work. In this paper, we explore a specific work practice called bug reproduction that is a part of the software bug-fixing process. Bug reproduction is a highly collaborative process by which software developers attempt to locally replicate the ‘environment’ within which a bug was originally encountered. Customers, who encounter bugs in their everyday use of systems, play an important role in bug reproduction as they provide useful information to developers, in the form of steps for reproduction, software screenshots, trace logs, and other ways to describe a problem. Bug reproduction, however, poses major hurdles in software maintenance as it is often challenging to replicate the contextual aspects that are at play at the customers’ end. To study the bug reproduction process from a human-centered perspective, we carried out an ethnographic study at a multinational engineering company. Using semi-structured interviews, a questionnaire and half-a-day observation of sixteen software developers working on different software maintenance projects, we studied bug reproduction. In this paper, we present a holistic view of bug reproduction practices from a real-world setting and discuss implications for designing tools to address the challenges developers face during bug reproduction.