868 results for extended mind
Abstract:
Competition between companies and the search for ever more efficient management and organization models dominate the current business landscape. The Lean management philosophy answers this need for higher levels of competitiveness and efficiency through a change in organizational culture based on reducing or eliminating waste and on the continuous improvement of manufacturing and service-delivery processes. Lean management is supported and implemented through a set of tools that must be correctly selected and adapted to the organizational context of the company or organization. This dissertation characterizes the most common tools of the Lean philosophy, considering their applicability in industry and in the service sector. It also discusses how Lean tools should be applied so that they do not become an isolated act that would surely lead the Lean implementation to fail. To that end, rules and criteria are discussed, based on a proposed method for applying Lean tools that avoids the mistakes made in the past which led to unsuccessful Lean implementations in some organizations. A case study from the service sector was carried out, and its results confirmed the applicability of the proposed method for applying Lean tools to services. The case study revealed a high percentage of waste in the process under analysis and made it possible to improve the operation of those processes. The improvements were achieved by eliminating waste, solving problems and consequently standardizing processes, which increased the quality and efficiency of the service provided, showing that the organization under study is on the right track to successfully shift its organizational culture towards the Lean philosophy.
Abstract:
Over the past 15 years, the emergence and dissemination of resistance to third-generation cephalosporins in nosocomial Enterobacteriaceae has become a serious problem worldwide, due to the production of extended-spectrum β-lactamases (ESBLs). The aim of this study was to investigate the presence of ESBL-producing enterobacteria among Portuguese clinical isolates from a region near Spain, to characterize their antimicrobial susceptibility patterns, and to compare the two countries. The β-lactamase genes blaTEM, blaSHV and blaCTX-M were detected by molecular methods. Among the ESBL-producing isolates, an extraordinarily high level (98.9%) of resistance to the fourth-generation cephalosporin cefepime was found. These findings point to the need to re-evaluate the definition of ESBL.
Abstract:
Component joining is typically performed by welding, fastening, or adhesive bonding. For bonded aerospace applications, adhesives must withstand high temperatures (200°C or above, depending on the application), which implies their mechanical characterization under identical conditions. The extended finite element method (XFEM) is an enhancement of the finite element method (FEM) that can be used for the strength prediction of bonded structures. This work proposes and validates damage laws for a thin layer of an epoxy adhesive at room temperature (RT), 100, 150, and 200°C using the XFEM. The fracture toughness (G_Ic) and maximum load in pure tensile loading were defined by testing double-cantilever beam (DCB) and bulk tensile specimens, respectively, which permitted building the damage laws for each temperature. The bulk test results revealed that the maximum load decreased gradually with temperature. On the other hand, the value of G_Ic of the adhesive, extracted from the DCB data, was shown to be relatively insensitive to temperature up to the glass transition temperature (T_g), while above T_g (at 200°C) a large reduction took place. The output of the DCB numerical simulations for the various temperatures showed good agreement with the experimental results, which validated the obtained data for strength prediction of bonded joints in tension. Given these results, the XFEM proved to be an alternative for the accurate strength prediction of bonded structures.
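As an illustration of how a damage law can be assembled from the two measured quantities, the minimal Python sketch below builds a triangular (linear-softening) traction-separation curve from an assumed cohesive strength and fracture toughness; the stiffness and the numerical values are placeholders and the triangular shape is only an assumption, not necessarily the law calibrated in this work.

import numpy as np

def triangular_law(t0, GIc, K=1.0e6):
    """Linear-softening (triangular) traction-separation law.
    t0 : cohesive strength [MPa], GIc : toughness [N/mm], K : initial stiffness [MPa/mm]."""
    d0 = t0 / K               # separation at damage initiation
    df = 2.0 * GIc / t0       # final separation (triangle area equals GIc)
    d = np.array([0.0, d0, df])
    t = np.array([0.0, t0, 0.0])
    return d, t

# Illustrative values only (placeholders, not the measured data of this work)
for temp, t0, GIc in [(25, 40.0, 0.40), (200, 15.0, 0.10)]:
    d, t = triangular_law(t0, GIc)
    print(f"{temp} C: damage onset at {d[1]:.2e} mm, failure at {d[2]:.3f} mm")

The final separation is 2*GIc/t0 because the area under the triangle must equal the fracture toughness, which is how the strength and toughness measurements jointly define the law at each temperature.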
Abstract:
Consider the problem of non-migrative scheduling of a set of implicit-deadline sporadic tasks to meet all deadlines on a two-type heterogeneous multiprocessor platform. We ask the following question: does there exist a phase transition behavior for the two-type heterogeneous multiprocessor scheduling problem? We also provide some initial observations via simulations performed on randomly generated task sets.
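As a toy illustration of what such a phase transition would look like in simulation (a hypothetical setup, not the authors' experiment), the sketch below generates random task sets with one utilization per processor type, partitions them with a naive first-fit heuristic onto one processor of each type, and reports the fraction of schedulable sets as a load factor grows; a sharp drop in that fraction is the signature of a phase transition.

import random

def random_task_set(n, load):
    # each implicit-deadline task gets one utilization per processor type
    return [(random.uniform(0, load) / n, random.uniform(0, load) / n) for _ in range(n)]

def first_fit(tasks):
    u1 = u2 = 0.0                       # accumulated utilization on each processor
    for a, b in tasks:                  # a: utilization on type 1, b: on type 2
        if u1 + a <= 1.0 and (a <= b or u2 + b > 1.0):
            u1 += a
        elif u2 + b <= 1.0:
            u2 += b
        else:
            return False
    return True

n, trials = 20, 500
for load in (2.0, 4.0, 5.0, 6.0, 7.0, 8.0):
    ok = sum(first_fit(random_task_set(n, load)) for _ in range(trials))
    print(f"load {load:.1f}: fraction schedulable = {ok / trials:.2f}")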
Abstract:
Adhesive bonding of the unions in multi-component structures is gaining momentum over welding, riveting and fastening. The availability of accurate damage models is vital for the design of bonded structures, to minimize design costs and time to market. Cohesive zone models (CZMs) have been used for fracture prediction in structures. The eXtended Finite Element Method (XFEM) is a recent improvement of the Finite Element Method (FEM) that relies on traction-separation laws similar to those of CZMs, but allows the growth of discontinuities within bulk solids along an arbitrary path, by enriching degrees of freedom. This work proposes and validates a damage law to model crack propagation in a thin layer of a structural epoxy adhesive using the XFEM. The fracture toughness in pure mode I (G_Ic) and the tensile cohesive strength (σ_n0) were defined by double-cantilever beam (DCB) and bulk tensile tests, respectively, which made it possible to build the damage law. The XFEM simulations of the DCB tests accurately matched the experimental load-displacement (P-δ) curves, which validated the analysis procedure.
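For context on how G_Ic is typically reduced from DCB propagation data, the short Python sketch below applies corrected beam theory, G_I = 3*P*delta / (2*B*(a + |Delta|)); the specimen width, the crack-length correction Delta and the readings are placeholders rather than the values measured in this work.

import numpy as np

def g_I_cbt(P, delta, a, B, Delta=0.0):
    """Mode I energy release rate by corrected beam theory.
    P [N], delta [mm] (opening displacement), a [mm] (crack length),
    B [mm] (specimen width), Delta [mm] (crack-length correction)."""
    return 3.0 * P * delta / (2.0 * B * (a + abs(Delta)))

# Placeholder propagation readings (not the measurements of this work)
P     = np.array([145.0, 138.0, 130.0, 124.0])   # load
delta = np.array([4.1, 5.0, 6.1, 7.2])           # opening displacement
a     = np.array([55.0, 62.0, 70.0, 78.0])       # crack length
G = g_I_cbt(P, delta, a, B=25.0, Delta=3.0)
print("G_I along propagation [N/mm]:", np.round(G, 3))
print("G_Ic estimate (plateau mean):", round(float(G.mean()), 3))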
Abstract:
Purpose - The purpose of this paper is to discuss the linear solution of equality-constrained problems by using the frontal solution method without explicit assembling.
Design/methodology/approach - A rewritten frontal solution method with a priori pivot and front sequences. OpenMP parallelization, nearly linear (in elimination and substitution) up to 40 threads. Constraints are enforced at the local assembling stage.
Findings - When compared with both standard sparse solvers and classical frontal implementations, memory requirements and code size are significantly reduced.
Research limitations/implications - Large, non-linear problems with constraints typically make use of the Newton method with Lagrange multipliers. In the context of the solution of problems with a large number of constraints, matrix transformation methods (MTM) are often more cost-effective. The paper presents a complete solution, with topological ordering, for this problem.
Practical implications - A complete software package in Fortran 2003 is described. Examples of clique-based problems are shown, with large systems solved in core.
Social implications - More realistic non-linear problems can be solved with this frontal code at the core of the Newton method.
Originality/value - Use of topological ordering of constraints. A priori pivot and front sequences. No need for symbolic assembling. Constraints treated at the core of the frontal solver. Use of OpenMP in the main frontal loop, now quantified. Availability of software.
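For context on the two alternatives the abstract contrasts (Lagrange multipliers versus matrix transformation methods), here is a minimal dense Python sketch of both on a tiny equality-constrained system; it is only illustrative and does not reproduce the frontal, assembly-free treatment described in the paper.

import numpy as np

# Small dense example: minimize 1/2 u'K u - f'u subject to C u = g.
K = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
f = np.array([1.0, 0.0, 1.0])
C = np.array([[1.0, -1.0, 0.0]])   # single constraint: u0 - u1 = 0
g = np.array([0.0])

# (a) Lagrange multipliers: bordered KKT system [[K, C^T], [C, 0]].
n, m = K.shape[0], C.shape[0]
A = np.block([[K, C.T], [C, np.zeros((m, m))]])
u_lagrange = np.linalg.solve(A, np.concatenate([f, g]))[:n]
print("u (Lagrange multipliers):", u_lagrange)

# (b) Matrix transformation method (MTM): write u = T q so the constraint holds
#     identically, then solve the reduced system T^T K T q = T^T f (here g = 0).
T = np.array([[1.0, 0.0],    # u0 = q0
              [1.0, 0.0],    # u1 = q0  -> u0 - u1 = 0 by construction
              [0.0, 1.0]])   # u2 = q1
q = np.linalg.solve(T.T @ K @ T, T.T @ f)
print("u (MTM):                 ", T @ q)

Both routes give the same displacements; the MTM route works with a smaller, still positive-definite system, which is why it is often more cost-effective when the number of constraints is large.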
Abstract:
This paper presents a collaborative virtual learning environment, which includes technologies such as 3D virtual representations, learning and content management systems, remote experiments, and collaborative learning spaces, among others. It intends to facilitate the construction, management and sharing of knowledge among teachers and students, from a global perspective. The environment proposes the use of 3D social representations for accessing learning materials in a dynamic and interactive form, which is regarded as closer to the physical reality experienced by teachers and students in a learning context. A first implementation of the proposed extended immersive learning environment, in the area of solid mechanics, is also described, including access to theoretical contents and a remote experiment to determine the elastic modulus of a given object.
Abstract:
The structural integrity of multi-component structures is usually determined by the strength and durability of their unions. Adhesive bonding is often chosen over welding, riveting and bolting, due to the reduction of stress concentrations, the reduced weight penalty and easy manufacturing, among other factors. In the past decades, the Finite Element Method (FEM) has been used for the simulation and strength prediction of bonded structures, using strength-of-materials or fracture-mechanics-based criteria. Cohesive zone models (CZMs) have already proved to be an effective tool in modelling damage growth, surpassing a few limitations of the aforementioned techniques. Despite this, they still restrict damage growth to predefined growth paths. The eXtended Finite Element Method (XFEM) is a recent improvement of the FEM, developed to allow the growth of discontinuities within bulk solids along an arbitrary path, by enriching degrees of freedom with special displacement functions, thus overcoming the main restriction of CZMs. These two techniques were tested to simulate adhesively bonded single- and double-lap joints. The comparative evaluation of the two methods showed their capabilities and limitations for this specific purpose.
Abstract:
Prototype validation is a major concern in modern electronic product design and development. Simulation, structural test, and functional and timing debug are all part of the validation process, although they are very often addressed as dissociated tasks. In this paper we describe an integrated approach to board-level prototype validation, based on a set of mandatory/optional BST instructions and a built-in controller for debug and test, that addresses the aforementioned tasks as inherent parts of a whole process.
Abstract:
Review of several performances presented at the Warszawskie Spotkania Teatralne in Warsaw (Poland), 2012.
Abstract:
Motivated by the dark matter and baryon asymmetry problems, we analyze a complex singlet extension of the Standard Model with a Z_2 symmetry (which provides a dark matter candidate). After a detailed two-loop calculation of the renormalization group equations for the new scalar sector, we study the radiative stability of the model up to a high energy scale (with the constraint that the 126 GeV Higgs boson found at the LHC is in the spectrum) and find that it requires the existence of a new scalar state mixing with the Higgs, with a mass larger than 140 GeV. This bound is not very sensitive to the cutoff scale as long as the latter is larger than 10^10 GeV. We then include all experimental and observational constraints/measurements from collider data, from dark matter direct detection experiments, and from the Planck satellite, and in addition require stability at least up to the grand unified theory scale, to find that the lower bound is raised to about 170 GeV, while the dark matter particle must be heavier than about 50 GeV.
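For orientation, one common parametrization of a complex-singlet extension (the cxSM-type potential) is sketched below in LaTeX; normalizations and which soft-breaking terms are retained vary between papers, so this is indicative of the structure rather than necessarily the authors' exact potential.

% Indicative cxSM-type scalar potential; conventions vary between papers.
V(H,\mathbb{S}) = \frac{m^{2}}{2}\,H^{\dagger}H
  + \frac{\lambda}{4}\,\bigl(H^{\dagger}H\bigr)^{2}
  + \frac{\delta_{2}}{2}\,H^{\dagger}H\,\lvert\mathbb{S}\rvert^{2}
  + \frac{b_{2}}{2}\,\lvert\mathbb{S}\rvert^{2}
  + \frac{d_{2}}{4}\,\lvert\mathbb{S}\rvert^{4}
  + \Bigl(\frac{b_{1}}{4}\,\mathbb{S}^{2} + a_{1}\,\mathbb{S} + \mathrm{c.c.}\Bigr),
\qquad \mathbb{S} = \tfrac{1}{\sqrt{2}}\,(s + iA).

In analyses of this kind, a discrete symmetry such as A -> -A (preserved, for instance, when a_1 and b_1 are real and A acquires no vacuum expectation value) keeps one singlet component from mixing with the Higgs, which is what makes it a dark matter candidate.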
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
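To make the constrained least-squares formulation mentioned above concrete, the following minimal Python sketch (synthetic data, not tied to any particular cited algorithm) estimates the abundance fractions of a single pixel under the linear mixing model y = Mx + n with x >= 0 and sum(x) = 1.

import numpy as np
from scipy.optimize import nnls

def fcls(M, y, rho=1.0e3):
    """Fully constrained least squares: nonnegativity via NNLS, with the
    sum-to-one constraint imposed by a heavily weighted extra row (a standard trick)."""
    bands, p = M.shape
    Ma = np.vstack([M, rho * np.ones((1, p))])
    ya = np.append(y, rho)
    x, _ = nnls(Ma, ya)
    return x

rng = np.random.default_rng(0)
M = rng.random((50, 3))                  # 50 bands, 3 endmember signatures
x_true = np.array([0.6, 0.3, 0.1])       # abundance fractions (nonnegative, sum to one)
y = M @ x_true + 0.01 * rng.standard_normal(50)
print("estimated abundances:", np.round(fcls(M, y), 3))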
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].
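As a small illustration of the dimensionality-reduction step mentioned above, the sketch below projects synthetic linearly mixed data onto its leading principal components (plain PCA; the cited MNF and SVD variants differ mainly in how noise is weighted). With 4 endmembers and the sum-to-one constraint, 3 dimensions already capture the signal subspace.

import numpy as np

def pca_project(Y, k):
    """Project the (bands x pixels) matrix Y onto its top-k principal components."""
    mu = Y.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(Y - mu, full_matrices=False)
    return U[:, :k].T @ (Y - mu), U[:, :k], mu   # scores, basis, mean

rng = np.random.default_rng(1)
M = rng.random((100, 4))                          # 100 bands, 4 endmember signatures
A = rng.dirichlet(np.ones(4), size=1000).T        # abundances: positive, sum to one
Y = M @ A + 0.005 * rng.standard_normal((100, 1000))
Z, U, mu = pca_project(Y, k=3)
print("reduced data shape:", Z.shape)             # (3, 1000): p-1 dimensions suffice here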
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
Abstract:
Master's degree in Business Legal Sciences (Ciências Jurídicas Empresariais).
Abstract:
Mental practice is an internal reproduction of a motor act, intended to promote learning and to improve motor skills. Some studies have shown that other cognitive strategies also increase strength and muscular endurance in healthy people by enhancing performance during dynamic tasks. Mental training sessions may be important for improving muscle strength in different subjects. The aim of this study was to systematically review and meta-analyze studies that assessed whether mental practice is effective in improving muscular strength. We conducted an electronic search in PubMed/Medline, ISI Web of Knowledge and Scielo, complemented by manual searches, for papers written in English between 1991 and 2014. There were 44 studies in PubMed/Medline, 631 in ISI Web of Knowledge, 11 in Scielo and 3 from manual searches. After excluding studies that were duplicates, unrelated to the topic by title and abstract, or that used different samples and methodologies, a meta-analysis of 4 studies was carried out to identify the dose-response relationship. We did not find evidence that mental practice is effective in increasing strength in healthy individuals. There is no evidence that mental practice alone can be effective in inducing strength gains or in optimizing the training effects.