Abstract:
The vision of a digital earth (DE) is continuously evolving, and the next-generation infrastructures, platforms and applications are being implemented. In this article, we attempt to initiate a debate within the DE and related communities about 'why' a digital earth curriculum (DEC) is needed, 'how' it should be developed, and 'what' it could look like. It is impossible to do justice to the Herculean effort of DEC development without extensive consultations with the broader community. We propose a frame for the debate (the what, why, and how of a DEC), present a rationale for and elements of a curriculum for educating the coming generations of digital natives, and indicate possible realizations. In particular, by emphasizing its unique characteristics, we argue that a DEC is not a déjà vu of the classical research and training agendas of geographic information science, remote sensing, and similar fields.
Abstract:
Consensus was developed by the remote sensing community during the 1980s and early 1990s regarding the need for an organized approach to teaching remote sensing fundamentals in collegiate institutions. Growth of the remote sensing industry might be seriously hampered without concerted efforts to bolster the capacity to teach state-of-the-practice remote sensing theory and practice to the next generation of professionals. A concerted effort of educators, researchers, government, and industry began in 1992 to meet these demands, leading to the creation of the Remote Sensing Core Curriculum (RSCC). The RSCC is currently sustained by the cooperative efforts of the ASPRS, ICRSE, NASA, NCGIA, and others in the remote sensing community. Growth of the RSCC into the K-12 community resulted from its Internet teaching foundation, which enables comprehensive and responsive reference links to the whole of the education community.
Abstract:
Studies on quantitative fit analysis of precontoured fracture fixation plates have emerged only within the last few years; consequently, there is a wide research gap in this area. Quantitative fit assessment enables measurement of the gap between a fracture fixation plate and the underlying bone, and specifies the required plate fit criteria. For a clinically meaningful fit assessment outcome, it is necessary to establish the appropriate criteria and parameters. The present paper studies this subject and recommends using multiple fit criteria, with the maximum distance between the plate and the underlying bone as the fit parameter, for clinically relevant outcomes. We also propose the development of a software tool for automatic plate positioning and fit assessment for the purpose of implant design validation and optimization, in an effort to provide better-fitting implants that can assist proper fracture healing. The fundamental specifications of the software are discussed.
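To make the recommended fit parameter concrete, here is a minimal sketch of a maximum-distance fit check, assuming the plate underside and bone surface are available as 3-D point clouds; the threshold value, stand-in geometry, and function names are illustrative and not taken from the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def assess_plate_fit(plate_points, bone_points, max_gap_mm=2.0):
    """Assess plate-to-bone fit via the maximum plate-bone distance.

    plate_points, bone_points: (N, 3) arrays of surface points in mm.
    max_gap_mm: illustrative fit criterion (threshold on the largest gap).
    """
    # For each point on the plate underside, find the distance to the
    # nearest point on the bone surface.
    tree = cKDTree(bone_points)
    gaps, _ = tree.query(plate_points)

    max_gap = gaps.max()  # the fit parameter recommended in the paper
    return {"max_gap_mm": float(max_gap), "fits": bool(max_gap <= max_gap_mm)}

# Illustrative usage with random stand-in geometry:
rng = np.random.default_rng(0)
bone = rng.uniform(0, 50, size=(5000, 3))
plate = bone[:200] + rng.uniform(0, 1.5, size=(200, 3))
print(assess_plate_fit(plate, bone))
```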
Abstract:
This thesis opens up the design space for awareness research in CSCW and HCI. By challenging the prevalent understanding of roles in awareness processes and exploring different mechanisms for actively engaging users in the awareness process, this thesis provides a better understanding of the complexity of these processes and suggests practical solutions for designing and implementing systems that support active awareness. Mutual awareness, a prominent research topic in the fields of Computer-Supported Cooperative Work (CSCW) and Human-Computer Interaction (HCI), refers to a fundamental aspect of a person's work: their ability to gain a better understanding of a situation by perceiving and interpreting their co-workers' actions. Technologically mediated awareness, used to support co-workers across distributed settings, distinguishes between the roles of the actor, whose actions are often limited to being the target of automated data-gathering processes, and the receiver, who wants to be made aware of the actors' actions. This receiver-centric view of awareness, focusing on helping receivers to deal with complex sets of awareness information, stands in stark contrast to our understanding of awareness as a social process involving complex interactions between both actors and receivers. It fails to take into account an actor's intimate understanding of their own activities and the contribution that this subjective understanding could make in providing richer awareness information. In this thesis I challenge the prevalent receiver-centric notion of awareness, and explore the conceptual foundations, design, implementation and evaluation of an alternative active awareness approach by making the following five contributions. Firstly, I identify the limitations of existing awareness research and solicit further evidence to support the notion of active awareness. I analyse ethnographic workplace studies that demonstrate how actors engage in an intricate interplay involving the monitoring of their co-workers' progress and the displaying of aspects of their own activities that may be of relevance to others. The examination of a large body of awareness research reveals that while disclosing information is a common practice in face-to-face collaborative settings, it has been neglected in implementations of technically mediated awareness. Based on these considerations, I introduce the notion of intentional disclosure to describe the action of users actively and deliberately contributing awareness information. Secondly, I consider challenges and potential solutions for the design of active awareness. I compare a range of systems, each allowing users to share information about their activities at various levels of detail. I discuss one of the main challenges to active awareness: that disclosing information about activities requires some degree of effort. I discuss various representations of effort in collaborative work. These considerations reveal that there is a trade-off between the richness of awareness information and the effort required to provide it. Thirdly, I propose a framework for active awareness, aimed at helping designers understand the scope and limitations of different types of intentional disclosure. I draw on the identified richness/effort trade-off to develop two types of intentional disclosure, both of which aim to facilitate the disclosure of information while reducing the effort required to do so.
For both of these approaches, direct and indirect disclosure, I delineate how they differ from related approaches and define a set of design criteria intended to guide their implementation. Fourthly, I demonstrate how the framework of active awareness can be practically applied by building two proof-of-concept prototypes that implement direct and indirect disclosure respectively. AnyBiff, implementing direct disclosure, allows users to create, share and use shared representations of activities in order to express their current actions and intentions. SphereX, implementing indirect disclosure, represents shared areas of interest or working contexts, and links sets of activities to these representations. Lastly, I present the results of the qualitative evaluation of the two prototypes and analyse the extent to which they implemented their respective disclosure mechanisms and supported active awareness. Both systems were deployed and tested in real-world environments. The results for AnyBiff showed that users developed a wide range of activity representations, some unanticipated, and actively used the system to disclose information. The results further highlighted a number of design considerations relating to the relationship between awareness and communication, and the role of ambiguity. The evaluation of SphereX validated the feasibility of the indirect disclosure approach. However, the study highlighted the challenges of implementing cross-application awareness support and of translating the concept to users. The study resulted in design recommendations aimed at improving the implementation of future systems.
Abstract:
E-mail spam has remained a scourge and menacing nuisance for users and for internet and network service operators and providers, in spite of the anti-spam techniques available; spammers relentlessly circumvent the anti-spam techniques embedded or installed as software products on both the client and server sides of fixed and mobile devices. This continuous evasion degrades the capabilities of these anti-spam techniques, as none of them provides a comprehensively reliable solution to the problem posed by spam and spammers. A major problem arises, for instance, when these anti-spam techniques misjudge or misclassify legitimate emails as spam (false positives), or fail to deliver or block spam on the SMTP server (false negatives). In the latter case the spam passes on to the receiver, yet the originating server neither notices nor has an auto-alert service to indicate that the spam it was designed to prevent has slipped through to the receiver's SMTP server; the receiver's SMTP server in turn fails to stop the spam from reaching the user's device, likewise with no auto-alert mechanism to report this failure, causing a staggering cost in lost time, effort and money. This paper presents a comparative literature overview of some of these anti-spam techniques, especially the filtering technologies designed to prevent spam, and their merits and demerits, with a view to enhancing their capabilities, together with evaluative analytical recommendations that will be subject to further research.
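To make the false-positive/false-negative distinction concrete, here is a minimal sketch of a content-based filter of the kind the paper surveys, using scikit-learn's naive Bayes classifier; the training corpus is illustrative and this is not a technique proposed by the paper:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny illustrative corpus; real filters train on large labelled sets.
emails = [
    "win a free prize now", "cheap meds limited offer",   # spam
    "meeting agenda attached", "lunch tomorrow at noon",  # ham
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)
clf = MultinomialNB().fit(X, labels)

test = ["free offer for the meeting", "project meeting notes"]
pred = clf.predict(vectorizer.transform(test))

# A legitimate email classified 1 here would be a false positive;
# a spam email classified 0 would be a false negative.
print(list(zip(test, pred)))
```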
Abstract:
Public sector organisations (PSOs) operate in information-intensive environments, often within operational contexts where efficiency is a goal. Moreover, the rapid adoption of IT is expected to facilitate good governance within public sector organisations, but it often clashes with the bureaucratic culture of these organisations. Accordingly, models such as IT Governance (ITG) and government reform, in particular the new public management (NPM), were introduced in PSOs in an effort to address the inefficiencies of bureaucracy and underperformance. This work explores the potential effect of changes in political direction and policy on the stability of IT governance in Australian public sector organisations. The aim of this paper is to examine the implications of a change of government, and the resulting political environment, for the effectiveness of the audit function of ITG. The empirical data discussed here indicate that a number of aspects of audit functionality were negatively affected by the change in political direction and the resultant policy changes. The results indicate a perceived decline in capacity and capability, which in turn disrupts the stability of IT governance systems in public sector organisations.
Abstract:
Unsaturated water flow in soil is commonly modelled using Richards' equation, which requires the hydraulic properties of the soil (e.g., porosity, hydraulic conductivity) to be characterised. Naturally occurring soils, however, are heterogeneous in nature; that is, they are composed of a number of interwoven homogeneous soils, each with its own set of hydraulic properties. When the length scale of these soil heterogeneities is small, numerical solution of Richards' equation is computationally impractical due to the immense effort and refinement required to mesh the actual heterogeneous geometry. A classic way forward is to use a macroscopic model, where the heterogeneous medium is replaced with a fictitious homogeneous medium that attempts to give the average flow behaviour at the macroscopic scale (i.e., at a scale much larger than that of the heterogeneities). Using homogenisation theory, a macroscopic equation can be derived that takes the form of Richards' equation with effective parameters. A disadvantage of the macroscopic approach, however, is that it fails in cases where the assumption of local equilibrium does not hold. This limitation has seen the introduction of two-scale models that include, at each point in the macroscopic domain, an additional flow equation at the scale of the heterogeneities (the microscopic scale). This report outlines a well-known two-scale model and contributes to the literature a number of important advances in its numerical implementation. These include the use of an unstructured control volume finite element method and image-based meshing techniques, which allow irregular micro-scale geometries to be treated, and the use of an exponential time integration scheme that permits both scales to be resolved simultaneously in a completely coupled manner. Numerical comparisons against a classical macroscopic model confirm that only the two-scale model correctly captures the important features of the flow for a range of parameter values.
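For reference, a standard mixed form of Richards' equation and its homogenised counterpart with effective parameters; the notation below is the conventional one and may differ from the report's:

```latex
% Richards' equation (mixed form): theta = water content, psi = pressure head,
% K = unsaturated hydraulic conductivity, z = vertical coordinate.
\frac{\partial \theta(\psi)}{\partial t}
  = \nabla \cdot \bigl[ K(\psi) \, \nabla (\psi + z) \bigr]

% Macroscopic (homogenised) form: the same equation with effective
% parameters replacing the heterogeneous ones.
\frac{\partial \theta_{\mathrm{eff}}(\psi)}{\partial t}
  = \nabla \cdot \bigl[ K_{\mathrm{eff}}(\psi) \, \nabla (\psi + z) \bigr]
```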
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions with the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that would result if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in receding horizon control (RHC) in the presence of constraints. We then turn to the obvious question of the optimality of the CE principle. We show that CE is, indeed, not optimal in general. We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty related to output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near-optimal performance. We thus advocate this approach in real applications.
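A minimal sketch of the CE strategy for a scalar system with an input constraint; the system parameters, observer gain, and feedback gain below are illustrative stand-ins, and the clipped linear law stands in for the deterministic constrained control law:

```python
import numpy as np

# Illustrative scalar system: x+ = a*x + b*u + w,  y = c*x + v
a, b, c = 0.9, 1.0, 1.0
L = 0.5        # illustrative steady-state observer (Kalman-style) gain
K = 0.6        # illustrative deterministic feedback gain
u_max = 1.0    # input constraint |u| <= u_max

rng = np.random.default_rng(1)
x, x_hat = 2.0, 0.0
for t in range(20):
    y = c * x + 0.05 * rng.standard_normal()   # noisy measurement
    x_hat = x_hat + L * (y - c * x_hat)        # correct the estimate
    # Certainty equivalence: use x_hat as if it were the true state
    # in the deterministic (constrained) control law.
    u = np.clip(-K * x_hat, -u_max, u_max)
    x = a * x + b * u + 0.05 * rng.standard_normal()
    x_hat = a * x_hat + b * u                  # predict the next estimate
print("final state ~", round(x, 3))
```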
Abstract:
Determining similarity between business process models has recently gained interest in the business process management community. So far, similarity has been addressed separately at either the semantic or the structural level of process models. Moreover, most contributions that measure the similarity of process models assume an ideal case in which process models are enriched with semantics, i.e., a description of the meaning of process model elements. In real life, however, this entails a heavy, effort-consuming manual pre-processing phase that is often not feasible. In this paper we propose an automated approach for querying a business process model repository for structurally and semantically relevant models. Similar to search on the Internet, a user formulates a BPMN-Q query and receives as a result a list of process models ordered by relevance to the query. We provide a business process model search engine implementation for the evaluation of the proposed approach.
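A minimal sketch of the querying idea, ranking repository models by a simple label-overlap score; the Jaccard scoring and the toy repository are stand-ins for the paper's combined structural/semantic relevance measure:

```python
def jaccard(a, b):
    """Similarity between two sets of activity labels."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Illustrative repository: model name -> activity labels extracted from BPMN.
repository = {
    "order-to-cash":   {"receive order", "check credit", "ship goods", "send invoice"},
    "procure-to-pay":  {"create requisition", "approve requisition", "send invoice"},
    "claims-handling": {"register claim", "check credit", "assess claim"},
}

query = {"check credit", "send invoice"}  # activities from a BPMN-Q query

# Return models ordered by relevance to the query, like search-engine results.
ranked = sorted(repository, key=lambda m: jaccard(query, repository[m]), reverse=True)
for name in ranked:
    print(f"{jaccard(query, repository[name]):.2f}  {name}")
```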
Abstract:
In order to execute, study, or improve operating procedures, companies document them as business process models. Often, business process analysts capture every single exception-handling or alternative task-handling scenario within a model. This tendency results in large process specifications, in which the core process logic becomes hidden among numerous modeling constructs. To fulfill different tasks, companies develop several model variants of the same business process at different abstraction levels. The subsequent maintenance of such model groups involves a lot of synchronization effort and is error-prone. We propose an abstraction technique that allows generalization of process models. Business process model abstraction assumes that a detailed model of a process is available and derives coarse-grained models from it. The task of abstraction is to tell significant model elements from insignificant ones and to reduce the latter. We propose to learn insignificant process elements from supplementary model information, e.g., task execution time or frequency of task occurrence. Finally, we discuss a mechanism for user control of the model abstraction level: an abstraction slider.
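A minimal sketch of the slider idea, dropping tasks whose estimated effort (here, execution time weighted by occurrence frequency, as the abstract suggests) falls below a user-controlled threshold; the task data and scoring are illustrative:

```python
# Illustrative tasks: (name, mean execution time in minutes, relative frequency).
tasks = [
    ("check order",        5.0, 1.00),
    ("handle rare refund", 2.0, 0.02),
    ("ship goods",        20.0, 0.95),
    ("fix address typo",   1.0, 0.05),
]

def abstract_model(tasks, slider):
    """Keep only tasks whose effort score meets the slider threshold.

    slider: abstraction level in [0, 1]; higher values remove more detail.
    """
    scores = {name: time * freq for name, time, freq in tasks}
    cutoff = slider * max(scores.values())
    return [name for name, score in scores.items() if score >= cutoff]

for level in (0.0, 0.2, 0.9):  # moving the abstraction slider
    print(level, abstract_model(tasks, level))
```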
Abstract:
Process models provide companies with efficient means for managing their business processes. The tasks in which process models are employed differ in nature and require models at various abstraction levels. However, maintaining several models of one business process involves a lot of synchronization effort and is error-prone. Business process model abstraction assumes that a detailed model of a process is available and derives coarse-grained models from it. The task of abstraction is to tell significant model elements from insignificant ones and to reduce the latter. In this paper we argue that process model abstraction can be driven by different abstraction criteria, where the choice of criterion depends on the task that abstraction facilitates. We propose an abstraction slider: a mechanism that allows user control of the model abstraction level. We discuss examples of combining the slider with different abstraction criteria and sets of process model transformation rules.
Abstract:
Over several decades, academics around the world have investigated the tools, techniques, and conditions that would allow BIM (building information modeling) to become a positive force in the world of construction. As the research results matured, BIM started to become commercially available. Researchers and many in industry soon realized that BIM, as a technological innovation, was not, in and of itself, the end point of the journey. The technical adoption of BIM has to be supported by process and culture change within organizations to make a real impact on a project (for example, see AECbytes Viewpoint #35 by Chuck Eastman, Paul Teicholz, Rafael Sacks and Kathleen Liston). Current academic research aims to understand the steps beyond BIM, which will help chart the future of our industry over the coming decades. This article describes an international research effort in this area, coordinated by the Integrated Design and Delivery Solutions (IDDS) initiative of the CIB (International Council for Research and Innovation in Building and Construction). We hope that it responds to and extends the discussion initiated by Brian Lighthart in AECbytes Viewpoint #56, which asked who is charting future BIM directions.
Abstract:
Recent information systems development using agile project management has yielded a 50% reduction in effort, together with significant improvements in organisational skills, productivity, quality and business satisfaction.
Abstract:
The aim of the current study was to examine how a number of individual factors (demographic factors (age and gender), personality factors, risk-taking propensity, attitudes towards drink driving, and perceived legitimacy of drink driving enforcement) influence the self-reported likelihood of drink driving. The second aim was to examine whether attitudes mediate the relationship between risk-taking and self-reported likelihood of drink driving. In total, 293 Queensland drivers volunteered to participate in an online survey that assessed their self-reported likelihood of drink driving in the next month, demographics, traffic-related demographics, personality factors, risk-taking propensity, attitudes towards drink driving, and perceived legitimacy of drink driving enforcement. An ordered logistic regression analysis was used to address the first aim: at the first step, the demographic variables were entered; at the second step, the personality and risk-taking variables; and at the third step, the attitude and perceived-legitimacy variables. Being a younger driver and having a high risk-taking propensity were related to self-reported likelihood of drink driving. However, once the attitudes variable was entered, these individual factors were no longer significant, with attitudes being the most important predictor of self-reported drink driving likelihood. For the second aim, a significant mediation model was found, such that attitudes mediated the relationship between risk-taking and self-reported likelihood of drink driving. Considerable effort and resources are devoted by traffic authorities to reducing drink driving on the Australian road network. Notwithstanding these efforts, some participants still held positive attitudes towards drink driving and reported that they were likely to drink drive in the future. These findings suggest that more work is needed to address attitudes regarding the dangerousness of drink driving.
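A minimal sketch of the step-wise ordered logistic analysis described above, using statsmodels' OrderedModel; the generated data, coefficients, and variable names are stand-ins for illustration only, not the study's data:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Illustrative stand-in data (the real study analysed 293 survey responses).
rng = np.random.default_rng(42)
n = 293
df = pd.DataFrame({
    "age":         rng.integers(17, 75, n).astype(float),
    "risk_taking": rng.normal(0, 1, n),
    "attitudes":   rng.normal(0, 1, n),
})
# Ordinal outcome: self-reported likelihood of drink driving (low/med/high),
# generated from the predictors purely for illustration.
latent = -0.03 * df["age"] + 0.5 * df["risk_taking"] + 0.9 * df["attitudes"]
df["likelihood"] = pd.cut(latent, 3, labels=["low", "med", "high"])

# Step-wise entry mirroring the study: demographics, then risk-taking, then attitudes.
for cols in (["age"], ["age", "risk_taking"], ["age", "risk_taking", "attitudes"]):
    res = OrderedModel(df["likelihood"], df[cols], distr="logit").fit(
        method="bfgs", disp=False)
    print(cols, "->", res.params.round(2).to_dict())
```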
Abstract:
Recent advances suggest that encoding images through Symmetric Positive Definite (SPD) matrices and then interpreting such matrices as points on Riemannian manifolds can lead to increased classification performance. Taking manifold geometry into account is typically done via (1) embedding the manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert Spaces (RKHS). While embedding into tangent spaces allows the use of existing Euclidean-based learning algorithms, the manifold shape is only approximated, which can cause a loss of discriminatory information. The RKHS approach retains more of the manifold structure, but may require non-trivial effort to kernelise Euclidean-based learning algorithms. In contrast to the above approaches, in this paper we offer a novel solution that allows SPD matrices to be used with unmodified Euclidean-based learning algorithms, with the true manifold shape well preserved. Specifically, we propose to project SPD matrices, using a set of random projection hyperplanes over an RKHS, into a random projection space, which leads to representing each matrix as a vector of projection coefficients. Experiments on face recognition, person re-identification and texture classification show that the proposed approach outperforms several recent methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian Locality Preserving Projection and Relational Divergence Classification.
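A minimal sketch of the overall pipeline shape: SPD matrices are turned into vectors of projection coefficients that any Euclidean learner can consume. Note this uses a log-Euclidean embedding followed by Gaussian random projection as a simplified stand-in for the paper's RKHS-based projection hyperplanes; all data and names are illustrative:

```python
import numpy as np

def spd_log_vec(S):
    """Log-Euclidean embedding: matrix logarithm of an SPD matrix, vectorised."""
    w, V = np.linalg.eigh(S)             # SPD => real eigenvalues/eigenvectors
    L = (V * np.log(w)) @ V.T            # matrix log via the eigendecomposition
    return L[np.triu_indices_from(L)]    # upper triangle suffices (L is symmetric)

def random_projection(vectors, dim, seed=0):
    """Project the embedded matrices onto random hyperplanes (Gaussian RP)."""
    rng = np.random.default_rng(seed)
    V = np.asarray(vectors)
    R = rng.standard_normal((V.shape[1], dim)) / np.sqrt(dim)
    return V @ R   # each row: a coefficient vector for Euclidean-based learners

# Illustrative SPD matrices (e.g., region covariance descriptors of images).
rng = np.random.default_rng(1)
spds = []
for _ in range(10):
    A = rng.standard_normal((5, 5))
    spds.append(A @ A.T + 5 * np.eye(5))   # guarantees positive definiteness

X = random_projection([spd_log_vec(S) for S in spds], dim=8)
print(X.shape)   # (10, 8): each SPD matrix is now a vector of coefficients
```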