992 results for Task structure
Abstract:
The present study examines knowledge of the discourse-appropriateness of Clitic Right Dislocation (CLRD) in a population of Heritage Speakers (HS) and Spanish-dominant native speakers in order to test the predictions of the Interface Hypothesis (IH; Sorace 2011). The IH predicts that speakers in language-contact situations will experience difficulty integrating information at the interface of the syntax and discourse modules. CLRD relates a dislocated constituent to a discourse antecedent, requiring the integration of syntax and pragmatics. Results from an acceptability judgment task did not support the predictions of the IH. No statistical differences between the HSs’ performance and that of the L1-dominant native speakers were found when participants were presented with an offline task. Thus, our study did not find any evidence of “incomplete acquisition” (Montrul 2008) as it pertains to this specific linguistic structure.
Abstract:
Nowadays, in the world of mass consumption, there is great demand for distribution centers of ever larger size. Managing such a center is a very complex and difficult task, given the many processes and factors in a typical warehouse, especially when we want to minimize labor costs. Most of the workers' working time is spent traveling between source and destination points, which causes deadheading. Even if a worker knows the structure of a warehouse well and can therefore find the shortest path between two points, it is still not guaranteed that there won't be long traveling times between the locations of two consecutive tasks. We need optimal assignments between tasks and workers. In the scientific literature, the Generalized Assignment Problem (GAP) is a well-known problem which deals with the assignment of m workers to n tasks subject to several constraints. The primary purpose of my thesis project was to choose a heuristic (genetic algorithm, tabu search, or ant colony optimization) to be implemented in SAP Extended Warehouse Management (SAP EWM) by which the assignment between tasks and resources would become more effective. After system analysis, I realized that due to various constraints and business demands only 1:1 assignments are allowed in SAP EWM. Because of that, I had to use a different and simpler approach instead of the heuristics introduced above, which in several cases yielded better assignments during the test phase. In the thesis I describe in detail the most important questions and problems that emerged during the planning of my optimized assignment method.
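Since SAP EWM only allows 1:1 assignments, the optimization reduces from the full GAP to a classical linear (one-to-one) assignment problem, which can be solved exactly. A minimal sketch under that reading, using a hypothetical travel-time matrix between workers' current positions and task source points (the data and names are illustrative, not the thesis implementation):

    # Optimal 1:1 worker-to-task assignment by travel time (illustrative sketch).
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # travel_time[i][j]: time for worker i to reach the source point of task j
    travel_time = np.array([
        [4.0, 9.0, 3.5],
        [7.0, 2.5, 8.0],
        [6.0, 5.0, 1.0],
    ])

    workers, tasks = linear_sum_assignment(travel_time)  # Hungarian algorithm
    for w, t in zip(workers, tasks):
        print(f"worker {w} -> task {t} (travel time {travel_time[w, t]})")
    print("total deadheading:", travel_time[workers, tasks].sum())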
Abstract:
A new managerial task arises in today's working life: to provide conditions for, and influence, interaction between actors, and thus to enable the emergence of an organizing structure in tune with a changing environment. We call this the enabling managerial task. The goal of this paper is to study whether training first-line managers in the enabling managerial task could lead to changes in the work of their subordinates. This paper presents results from questionnaires answered by the subordinates of the managers before and after the training. The training was organized as a learning network and consisted of eight workshops carried out over a period of one year (September 2009 to June 2010), in which the managers met with each other and the researchers once a month. Each workshop lasted three and a half hours and consisted of three parts. The first hour was devoted to joint reflection on a task that had been undertaken since the last workshop; then some results from the employee pre-assessments were presented, followed by relevant theory and illuminating practices; finally, the managers created new tasks for themselves to undertake during the following month. The subordinates' answers show positive change on all seventeen scales used in the assessment. The improvements are significant on the scales measuring the relationship between the manager and the employees, as well as on those measuring interaction between employees. It is concluded that the training was a success for all managers who had the possibility of using it in their management work.
Abstract:
This paper compares the effects on corporate performance and managerial self-dealing of a structure in which the CEO reports to a single board that is responsible for both monitoring management and establishing performance targets with an alternative in which the CEO reports to two boards, each responsible for a different task. The equilibrium set of the common agency game induced by the dual-board structure is fully characterized. Compared to a single board, a dual board demands less aggressive performance targets from the CEO but exerts more monitoring. A consequence of the first feature is that the CEO always exerts less effort toward production under a dual board. The effect of a dual board on CEO self-dealing is ambiguous: there are equilibria in which, in spite of the higher monitoring, self-dealing is higher in a dual system. The model indicates that the strategic interdependence generated by the assignment of different tasks to different boards may yield results that are far from the desired ones.
Abstract:
Includes bibliography.
Abstract:
Numerical modeling of the interaction between waves and coastal structures is a challenge due to the many nonlinear phenomena involved, such as wave propagation, wave transformation with water depth, interaction between incident and reflected waves, run-up/run-down, and wave overtopping. Numerical models based on a Lagrangian formulation, like SPH (Smoothed Particle Hydrodynamics), allow simulating complex free-surface flows. The validation of these numerical models is essential, but comparing numerical results with experimental data is not an easy task. In the present paper, two SPH numerical models, SPHysics LNEC and SPH UNESP, are validated by comparing their numerical results for waves interacting with a vertical breakwater with data obtained in physical model tests carried out in one of LNEC's flumes. To achieve this validation, the experimental set-up is designed to be compatible with the characteristics of the numerical models. Therefore, the flume dimensions are exactly the same for the numerical and physical models and the incident wave characteristics are identical, which allows determining the accuracy of the numerical models, particularly regarding two complex phenomena: wave breaking and impact loads on the breakwater. It is shown that partial renormalization, i.e. renormalization applied only to particles near the structure, seems to be a promising compromise and an original method that allows simultaneously propagating waves without diffusion and modeling the pressure field near the structure accurately.
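The partial renormalization mentioned here is typically a Shepard-type density filter, which re-interpolates each particle's density from its neighbours through the smoothing kernel, applied only to particles flagged as near the structure. A minimal 1D sketch under that reading (the cubic spline kernel and the near_structure mask are common SPH conventions assumed for illustration, not the paper's exact scheme):

    import numpy as np

    def cubic_spline_W(r, h):
        """Standard 1D cubic spline SPH kernel (normalization 2/(3h))."""
        q = r / h
        sigma = 2.0 / (3.0 * h)
        return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

    def shepard_filter(x, rho, m, h, near_structure):
        """Re-interpolate density (Shepard filter) only for flagged particles."""
        rho_new = rho.copy()
        for i in np.where(near_structure)[0]:
            W = cubic_spline_W(np.abs(x - x[i]), h)
            num = np.sum(m * W)            # sum_j m_j W_ij
            den = np.sum((m / rho) * W)    # sum_j (m_j / rho_j) W_ij
            rho_new[i] = num / den
        return rho_new

    # e.g. filter only particles within 10% of a wall at x = 1 (illustrative data)
    x = np.linspace(0.0, 1.0, 101)
    m, rho = np.full(101, 0.01), np.full(101, 1.0)
    rho = shepard_filter(x, rho, m, h=0.02, near_structure=(x > 0.9))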
Abstract:
The aim of this study was to evaluate the effects of local, regional, and temporal factors structuring fish assemblages in Meridional Amazonian streams during the months of May (rainy season) and August (dry season) of 2008. To accomplish this task, 14 streams located in Serra do Expedito (Aripuanã River basin) were sampled along 30-m stretches. A total of 3,212 specimens distributed among five orders, 18 families, and 55 species were recorded. The fish assemblage structure in the streams varied among types of riparian vegetation (local factor) and among watersheds (regional factor), but did not vary between seasons (temporal factor) or among stream orders (regional factor). Larger streams with margins covered with pasture presented higher species richness and abundance than smaller streams with margins covered with forest.
Abstract:
The classification of texts has become a major endeavor with so much electronic material available, for it is an essential task in several applications, including search engines and information retrieval. There are different ways to define similarity for grouping similar texts into clusters, as the concept of similarity may depend on the purpose of the task. For instance, in topic extraction similar texts are those within the same semantic field, whereas in author recognition stylistic features should be considered. In this study, we introduce ways to classify texts employing concepts of complex networks, which may be able to capture syntactic, semantic, and even pragmatic features. The interplay between various metrics of the complex networks is analyzed with three applications, namely identification of machine translation (MT) systems, evaluation of the quality of machine-translated texts, and authorship recognition. We show that topological features of the networks representing texts can enhance the ability to identify MT systems in particular cases. For evaluating the quality of MT texts, on the other hand, high correlation was obtained with methods capable of capturing the semantics. This was expected because the gold standards used are themselves based on word co-occurrence. Notwithstanding, the Katz similarity, which involves both semantics and structure in the comparison of texts, achieved the highest correlation with the NIST measure, indicating that in some cases the combination of both approaches can improve the ability to quantify quality in MT. In authorship recognition, the topological features were again relevant in some contexts, though for the books and authors analyzed good results were also obtained with semantic features. Because hybrid approaches encompassing semantic and topological features have not been used extensively, we believe that the methodology proposed here may enhance text classification considerably, as it combines well-established strategies.
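A common way to represent a text as a complex network, of the kind used in this line of work, is a word-adjacency (co-occurrence) graph whose topological metrics then serve as classification features. A minimal sketch using networkx (the naive whitespace tokenization and the chosen metrics are illustrative assumptions; the paper's exact feature set may differ):

    import networkx as nx

    def cooccurrence_network(text):
        """Build a word-adjacency network: nodes are words, edges link neighbours."""
        words = text.lower().split()
        G = nx.Graph()
        G.add_edges_from(zip(words, words[1:]))
        return G

    def topological_features(G):
        """Extract simple topological metrics usable as classification features."""
        return {
            "avg_clustering": nx.average_clustering(G),
            "avg_degree": sum(d for _, d in G.degree()) / G.number_of_nodes(),
            "density": nx.density(G),
        }

    G = cooccurrence_network("the cat sat on the mat and the cat slept")
    print(topological_features(G))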
Abstract:
The vast majority of known proteins have not yet been experimentally characterized, and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history, and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work was the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances, related to the specific type of amino-acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers, which drive the dimerization of many transcription factors, and more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of protein sequences and structures available poses new fundamental problems that still await interpretation. Nevertheless, these data are the basis for the design of new strategies for tackling problems such as the prediction of protein structure and function.
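A contact map of the kind analyzed here is a binary matrix marking residue pairs whose distance falls below a cutoff; treating it as an adjacency matrix yields the contact network whose characteristic path length and clustering coefficient are studied. A minimal sketch from C-alpha coordinates (the 8 Å cutoff and the sequence-separation filter are common conventions assumed for illustration, not necessarily this work's exact parameters):

    import numpy as np
    import networkx as nx

    def contact_map(ca_coords, cutoff=8.0, min_sep=3):
        """Binary contact map from C-alpha coordinates (N x 3 array)."""
        d = np.linalg.norm(ca_coords[:, None, :] - ca_coords[None, :, :], axis=-1)
        cmap = d < cutoff
        i, j = np.indices(d.shape)
        cmap &= np.abs(i - j) >= min_sep   # ignore trivial near-sequence contacts
        return cmap

    def small_world_metrics(cmap):
        """Characteristic path length and clustering coefficient of the contact network."""
        G = nx.from_numpy_array(cmap.astype(int))
        giant = G.subgraph(max(nx.connected_components(G), key=len))
        return nx.average_shortest_path_length(giant), nx.average_clustering(G)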
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of the annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in assigning sequences to a specific group of functionally related sequences that have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of the structure templates along the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.
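The clustering step described here amounts to linking sequences whose pairwise BLAST alignments pass joint identity and coverage thresholds, then taking connected components as clusters. A minimal sketch (the 90% thresholds and the hit-tuple format are illustrative assumptions; the abstract does not state the actual cutoffs):

    import networkx as nx

    # Hypothetical BLAST hits: (query, subject, percent_identity, alignment_coverage)
    hits = [
        ("seqA", "seqB", 95.0, 0.97),
        ("seqB", "seqC", 92.0, 0.91),
        ("seqA", "seqD", 88.0, 0.50),   # fails both stringent constraints
    ]

    def cluster_sequences(hits, min_identity=90.0, min_coverage=0.90):
        """Group sequences into clusters via connected components of passing hits."""
        G = nx.Graph()
        for q, s, ident, cov in hits:
            G.add_nodes_from([q, s])
            if ident >= min_identity and cov >= min_coverage:
                G.add_edge(q, s)
        return [sorted(c) for c in nx.connected_components(G)]

    print(cluster_sequences(hits))   # e.g. [['seqA', 'seqB', 'seqC'], ['seqD']]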
Abstract:
Mainstream IDEs generally rely on the static structure of a software project to support browsing and navigation. We propose HeatMaps, a simple but highly configurable technique to enrich the way an IDE displays the static structure of a software system with additional kinds of information. A heatmap highlights software artifacts according to various metric values, using colors such as bright red or pale blue to indicate their potential degree of interest. We present a prototype system that implements heatmaps, and we describe an initial study that assesses the degree to which different heatmaps effectively guide developers in navigating software.
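The core of such a heatmap is a mapping from a normalized metric value to a color along a pale-blue-to-bright-red gradient. A minimal sketch of that mapping (the linear interpolation, the endpoint colors, and the example metric are illustrative assumptions, not the prototype's actual palette):

    def heat_color(value, lo, hi):
        """Map a metric value in [lo, hi] to an RGB color from pale blue to bright red."""
        t = 0.0 if hi == lo else max(0.0, min(1.0, (value - lo) / (hi - lo)))
        pale_blue, bright_red = (173, 216, 230), (255, 0, 0)
        return tuple(round(a + t * (b - a)) for a, b in zip(pale_blue, bright_red))

    # e.g. color each class by its lines of code (hypothetical metric values)
    metrics = {"Parser": 1200, "Token": 80, "Lexer": 450}
    lo, hi = min(metrics.values()), max(metrics.values())
    for name, loc in metrics.items():
        print(name, heat_color(loc, lo, hi))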
Abstract:
Current development platforms for designing spoken dialog services feature different kinds of strategies to help designers build, test, and deploy their applications. In general, these platforms are made up of several assistants that handle the different design stages (e.g. definition of the dialog flow, prompt and grammar definition, database connection, or debugging and testing the running application). In spite of all the advances in this area, designing speech-based dialog services is in general a time-consuming process that needs to be accelerated. In this paper we describe a complete development platform that reduces the design time by using different types of acceleration strategies based on information from the data model structure and database contents, as well as cumulative information obtained throughout the successive steps of the design. Thanks to these accelerations, interaction with the platform is simplified and the design is reduced, in most cases, to simple confirmations of the “proposals” that the platform automatically provides at each stage. Different kinds of proposals are available to complete the application flow, such as the possibility of selecting which information slots should be requested from the user together, predefined templates for common dialogs, the most probable actions that make up each state defined in the flow, and different solutions to specific speech-modality problems, such as the presentation of lists of results retrieved after querying the backend database. The platform also includes accelerations for creating speech grammars and prompts, as well as the SQL queries for accessing the database at runtime. Finally, we describe the setup and results of simultaneous summative, subjective, and objective evaluations with different designers, carried out to test the usability of the proposed accelerations as well as their contribution to reducing design time and interaction.
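One of the accelerations mentioned, deriving the runtime SQL queries from the data model and the filled dialog slots, can be pictured as follows. This sketch is purely hypothetical (the table name, slot names, and parameterized-query style are assumptions; the abstract does not describe the platform's generator at this level of detail):

    def build_query(table, filled_slots, result_columns):
        """Compose a parameterized SELECT from filled dialog slots (hypothetical example)."""
        where = " AND ".join(f"{slot} = ?" for slot in filled_slots)
        sql = f"SELECT {', '.join(result_columns)} FROM {table} WHERE {where}"
        return sql, tuple(filled_slots.values())

    # e.g. a flight-information dialog where origin and date slots have been filled
    sql, params = build_query(
        "flights",
        {"origin": "Madrid", "departure_date": "2010-05-01"},
        ["flight_number", "departure_time"],
    )
    print(sql)     # SELECT flight_number, departure_time FROM flights WHERE ...
    print(params)  # ('Madrid', '2010-05-01')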
Abstract:
In professional video production, users have to access huge multimedia files simultaneously in an error-free environment; this restriction forces the use of expensive disk architectures for video servers. Previous research proposed different RAID systems for each specific task (ingest, editing, file serving, play-out, etc.). Video production companies therefore have to acquire different servers with different RAID systems in order to support each task in the production workflow. This solution has multiple disadvantages: material duplicated across several RAIDs, material duplicated for different qualities, transfer and transcoding processes, etc. In this work, an architecture for video servers based on spreading JPEG2000 data across different RAIDs is presented. Each individual part of the data structure goes to a specific RAID type depending on the effect that the data has on the overall image quality, so the method provides redundancy correlated with the rank of the data. The global storage can be used in all the different tasks of the production workflow, saving disk space and avoiding redundant files and transfer procedures.
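JPEG2000's layered codestream makes this kind of quality-ranked spreading natural: lower (more important) quality layers can be placed on RAID levels with stronger redundancy, higher layers on cheaper ones. A minimal sketch of such a placement policy (the specific layer-to-RAID mapping is an illustrative assumption, not the architecture actually proposed):

    # Hypothetical placement policy: more important JPEG2000 quality layers
    # go to RAID levels with stronger redundancy.
    RAID_TIERS = ["RAID 6", "RAID 5", "RAID 0"]   # decreasing redundancy

    def place_layers(num_layers):
        """Assign each quality layer (0 = most important) to a RAID tier."""
        placement = {}
        for layer in range(num_layers):
            # earlier layers affect image quality the most -> strongest redundancy
            tier = RAID_TIERS[min(layer * len(RAID_TIERS) // num_layers,
                                  len(RAID_TIERS) - 1)]
            placement[layer] = tier
        return placement

    print(place_layers(6))
    # {0: 'RAID 6', 1: 'RAID 6', 2: 'RAID 5', 3: 'RAID 5', 4: 'RAID 0', 5: 'RAID 0'}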
Abstract:
Research is presented on the semantic structure of 15 emotion terms as measured by judged-similarity tasks for monolingual English-speaking and monolingual and bilingual Japanese subjects. A major question is the relative explanatory power of a single shared model for English and Japanese versus culture-specific models for each language. The data support a shared model for the semantic structure of emotion terms, even though some robust and significant differences are found between the English and Japanese structures. The Japanese bilingual subjects use a model more like the English one when performing the tasks in English than when performing the same tasks in Japanese.
Abstract:
A method is given for separating the time course and spatial extent of consistently and transiently task-related activations from the other physiological and artifactual components that contribute to functional MRI (fMRI) recordings. Independent component analysis (ICA) was used to analyze two fMRI data sets from a subject performing 6-min trials composed of alternating 40-sec Stroop color-naming and control task blocks. Each component consisted of a fixed three-dimensional spatial distribution of brain voxel values (a “map”) and an associated time course of activation. For each trial, the algorithm detected, without a priori knowledge of their spatial or temporal structure, one consistently task-related component activated during each Stroop task block, plus several transiently task-related components activated at the onset of only one or two of the Stroop task blocks. Activation patterns occurring during only part of an fMRI trial are not observed with other techniques, because their time courses cannot easily be known in advance. Other ICA components were related to physiological pulsations, head movements, or machine noise. By using higher-order statistics to specify stricter criteria for spatial independence between component maps, ICA produced improved estimates of the temporal and spatial extent of task-related activation in our data compared with principal component analysis (PCA). ICA appears to be a promising tool for exploratory analysis of fMRI data, particularly when the time courses of activation are not known in advance.
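The decomposition described here is a spatial ICA: the time-by-voxel data matrix is unmixed into spatially independent maps, each with one associated activation time course. A minimal sketch with scikit-learn's FastICA (the synthetic data shape and the number of components are illustrative assumptions, and FastICA here merely stands in for the higher-order-statistics ICA algorithm the study used):

    import numpy as np
    from sklearn.decomposition import FastICA

    # Synthetic stand-in for an fMRI data set: T time points x V voxels
    rng = np.random.default_rng(0)
    T, V, k = 240, 500, 5
    X = rng.standard_normal((T, V))

    # Spatial ICA: treat voxels as samples so the recovered sources are
    # spatially independent maps; the mixing matrix holds their time courses.
    ica = FastICA(n_components=k, random_state=0)
    maps = ica.fit_transform(X.T).T      # k spatially independent maps (k x V)
    time_courses = ica.mixing_           # associated activation time courses (T x k)
    print(maps.shape, time_courses.shape)   # (5, 500) (240, 5)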
Abstract:
Europe is facing a double challenge: a significant need for long-term investments – crucial levers for economic growth – and a growing pension gap, both of which call for resolute action. Crucially, at a time when low interest rates and revised prudential standards strain the ability of life insurers and pension funds to offer guaranteed returns, Europe lacks a framework ensuring the quality and accessibility of long-term investment solutions for small retail investors and defined-contribution pension plans. This report considers the potential to steer household financial wealth – accounting for over 60% of total financial wealth in Europe – towards long-term investing, which would achieve two goals at once: higher growth and higher pensions. It follows a holistic approach that considers both solution design – how to gear product structuring towards long-term investing – and market structure – how to engineer a competitive market setting that is able to deliver high-quality and cost-efficient solutions. The report also considers prudential rules for insurers and pension funds and the potential to build a single market for less-liquid funds and occupational and personal pensions, with improved investor protection. It urges policy-makers to act aggressively to deliver more inclusive, efficient, and resilient retail investment markets that are better equipped and more committed to delivering value over the long term for beneficiaries.