27 results for Closed Convex Process


Relevance:

20.00%

Publisher:

Abstract:

We live in a changing world. New technological resources appear every day at an impressive speed. We increasingly use the Internet to obtain and share information, and new online communication tools keep emerging. Each of them carries new potential and creates new audiences. In recent years we have witnessed the emergence of Facebook, Twitter, YouTube and other media platforms. They have provided even greater interactivity between sender and receiver, as well as generated a new sense of community. At the same time, content is available as never before: we increasingly share texts, videos, photos, and so on.

This poster explores the potential of these new online communication tools in the cultural sphere to create new audiences, to develop a new kind of community, to provide information, and to offer different ways of building organizations' memory. The transience of the performing arts is accompanied by the need to counter that transience through documentation. This desire to 'save' events finds its expression in archiving information from the different production moments, as well as in the opportunity to record the event and present it through, for instance, digital platforms.

In this poster we intend to answer the following questions: which online communication tools are being used to engage audiences in the cultural sphere (specifically among theater companies in Lisbon)? Is there a new relationship with the public? Are online communication tools creating a new kind of community? What changes are these tools introducing in the creative process? In what way do the availability of content and its archiving contribute to the organization's memory?

Among several references, we draw on the two-way communication model already presented by James E. Grunig & Todd T. Hunt (1984) and on Manuel Castells's (2010) concept of mass self-communication. Castells also tells us that we have moved from traditional media to a system of communication networks. For Scott Kirsner (2010), we have entered an era of digital creativity, in which artists have the tools to do what they imagine and the public no longer wants merely to consume cultural goods, but to have a voice and participate. The creative process now depends on the public's choices as they move across the screen; it is the receiver who owns an object that can be exchanged. Virtual reality has encouraged receivers to abandon the position of passive observer and become participant agents, which poses a challenge to organizations: inventing new forms of interfaces.

Therefore, we intend to identify new and effective online tools that cultural organizations can use, and the best way to manage them; to show how organizations can build a community with the public; and to show how the availability of online content and its archive can contribute to organizations' memory.

Relevance:

20.00%

Publisher:

Abstract:

The aim of this paper is to evaluate the influence of the crushing process used to obtain recycled concrete aggregates on the performance of concrete made with those aggregates. Two crushing methods were considered: primary crushing, using a jaw crusher, and primary plus secondary crushing (PSC), using a jaw crusher followed by a hammer mill. Besides natural aggregates (NA), these two processes were also used to crush three types of concrete made in the laboratory (L20, L45 and L65) and three others from the precast industry (P20, P45 and P65). The coarse natural aggregates were fully replaced by coarse recycled concrete aggregates. The recycled aggregate concrete mixes were compared with reference concrete mixes made using only NA, and the following properties related to mechanical and durability performance were tested: compressive strength; splitting tensile strength; modulus of elasticity; carbonation resistance; chloride penetration resistance; water absorption by capillarity; water absorption by immersion; and shrinkage. The results show that the PSC process leads to better performance, especially in the durability properties. © 2014 RILEM

Relevance:

20.00%

Publisher:

Abstract:

Master's degree in Socio-Organizational Intervention in Health - Specialization area: Health Services Administration and Management Policies

Relevance:

20.00%

Publisher:

Abstract:

Wythoff Queens is a classical combinatorial game related to some remarkable mathematical results. A striking one is the fact that the P-positions are given by (⌊φn⌋, ⌊φ²n⌋) and (⌊φ²n⌋, ⌊φn⌋), where φ = (1+√5)/2. In this paper, we analyze a different version where one player (Left) plays with a chess bishop and the other (Right) plays with a chess knight. The new game (call it Chessfights) lacks the Beatty sequence structure of the P-positions seen in Wythoff Queens. However, it is possible to formulate and prove a general recursive law for its P-positions, which is a particular case of a Partizan Subtraction game.
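
The quoted formula is easy to check numerically. Below is a minimal Python sketch (ours, not the paper's) that enumerates the first P-positions of Wythoff Queens from the golden ratio:

```python
# A minimal sketch (not from the paper): enumerating the Wythoff Queens
# P-positions via the golden-ratio formula quoted above.
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def wythoff_p_position(n: int) -> tuple:
    """Return the n-th P-position (floor(phi*n), floor(phi^2*n))."""
    a = math.floor(PHI * n)
    b = math.floor(PHI * PHI * n)  # equivalently a + n, since phi^2 = phi + 1
    return (a, b)

# First few P-positions; their mirrors (b, a) are P-positions too.
for n in range(1, 6):
    print(wythoff_p_position(n))
# (1, 2), (3, 5), (4, 7), (6, 10), (8, 13)
```

Note that φ² = φ + 1, so the second coordinate always equals the first plus n.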

Relevance:

20.00%

Publisher:

Abstract:

Our aim was to analyse the impact of the characteristics of the words used in spelling programmes, and of the nature of the instructional guidelines, on the evolution from grapho-perceptive writing to phonetic writing in preschool children. The participants were 50 five-year-old children, divided into five groups equivalent in intelligence, phonological skills and spelling. All the children knew the vowels and the consonants B, D, P, R, T, V, F, M and C, but did not use them in their spelling. Their spelling was evaluated in a pre- and post-test with 36 words beginning with the known consonants. In between, they underwent a writing programme designed to lead them to use the letters P and T to represent the initial phonemes of words. The groups differed in the kind of words used in training: words whose initial syllable matches the name of the initial letter (Exp. G1 and Exp. G2) versus words whose initial syllable is similar to the sound of the initial letter (Exp. G3 and Exp. G4). They also differed in the instructions used to lead them to think about the relations between the initial phoneme of words and the initial consonant: instructions designed to make the children think about letter names (Exp. G1 and Exp. G3) versus instructions designed to make the children think about letter sounds (Exp. G2 and Exp. G4). The fifth was a control group. All the children evolved to syllabic phonetisation spellings. There were no differences between groups in the total number of phonetisations, but we found some differences between groups in the quality of those phonetisations.

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses the on-site assessment, by semi-destructive testing (SDT) methods, of the consolidation efficiency of a conservation process developed by Henriques (2011) for structural and non-structural pine wood elements in service. The study was applied to Scots pine wood (Pinus sylvestris L.) degraded by fungi, after treatment with a biocidal product followed by consolidation with a polymeric product. This solution avoids the replacement of wood moderately degraded by fungi by improving its physical and mechanical characteristics. The consolidation efficiency was assessed on site by drill resistance and penetration resistance methods. The SDT methods used showed good sensitivity to the conservation process and were able to evaluate its effectiveness. (C) 2015 Elsevier Ltd. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

This paper discusses the results of applied research in the eco-driving domain, based on a large data set produced by a fleet of Lisbon's public transportation buses over a three-year period. The data set is built from events automatically extracted from the controller area network (CAN) bus and enriched with GPS coordinates, weather conditions, and road information. We apply online analytical processing (OLAP) and knowledge discovery (KD) techniques to deal with the high volume of this data set, to determine the major factors that influence average fuel consumption, and then to classify the drivers involved according to their driving efficiency. Consequently, we identify the most appropriate driving practices and styles. Our findings show that introducing simple practices, such as optimal clutch use, engine rotation, and engine idling, can reduce fuel consumption by 3 to 5 L/100 km on average, meaning a saving of about 30 L per bus in one day. These findings have been strongly considered in the drivers' training sessions.
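
As a loose illustration of the classification step described above (a hedged sketch, not the paper's OLAP/KD pipeline; the column names driver_id and litres_per_100km are hypothetical), per-driver averages could be bucketed into efficiency classes like this:

```python
# A minimal sketch, not the paper's pipeline: per-driver average fuel
# consumption and a simple tertile-based efficiency classification.
# Column names ("driver_id", "litres_per_100km") are hypothetical.
import pandas as pd

def classify_drivers(events: pd.DataFrame) -> pd.DataFrame:
    per_driver = (events.groupby("driver_id")["litres_per_100km"]
                        .mean()
                        .rename("avg_l_per_100km")
                        .reset_index())
    # Tertiles: the lowest-consuming third is labeled "efficient", etc.
    per_driver["efficiency_class"] = pd.qcut(
        per_driver["avg_l_per_100km"],
        q=3,
        labels=["efficient", "average", "inefficient"],
    )
    return per_driver.sort_values("avg_l_per_100km")
```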

Relevance:

20.00%

Publisher:

Abstract:

Solution enthalpies of 18-crown-6 have been obtained for a set of 14 protic and aprotic solvents at 298.15 K. The complementary use of Solomonov's methodology and a QSPR-based approach allowed the identification of the most significant solvent descriptors that model the interaction enthalpy contribution to the solution process, ΔH_int(A/S). The results were compared with data previously obtained for 1,4-dioxane. Although the interaction enthalpies of 18-crown-6 correlate well with those of 1,4-dioxane, the magnitudes of the most relevant parameters, π* and β, are almost three times higher for 18-crown-6. This is rationalized in terms of the impact of the solute's volume on the solution processes of both compounds. (C) 2015 Elsevier B.V. All rights reserved.
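
To make the QSPR step concrete, here is a minimal sketch with invented numbers (ours, not the paper's regression; it assumes the Kamlet-Taft descriptors π* and β as the regressors) of fitting interaction enthalpies to solvent descriptors by ordinary least squares:

```python
# A minimal QSPR-style sketch, not the paper's model: ordinary least-squares
# fit of interaction enthalpies against Kamlet-Taft solvent descriptors.
# All numbers below are made up for illustration.
import numpy as np

# Hypothetical data: one row per solvent -> (pi*, beta) descriptors.
descriptors = np.array([
    [0.60, 0.45],
    [0.88, 0.31],
    [0.54, 0.73],
    [1.00, 0.76],
    [0.71, 0.40],
])
dH_int = np.array([-38.2, -41.5, -44.0, -52.3, -39.9])  # kJ/mol, fictitious

# Design matrix with an intercept column: dH_int ~ c0 + c1*pi* + c2*beta
X = np.column_stack([np.ones(len(dH_int)), descriptors])
coef, *_ = np.linalg.lstsq(X, dH_int, rcond=None)
print(dict(zip(["intercept", "pi*", "beta"], coef.round(2))))
```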

Relevance:

20.00%

Publisher:

Abstract:

This work evaluates the mechanical and durability performance of concrete made with coarse recycled concrete aggregates (CRCA) obtained using two crushing processes: primary crushing (PC) and primary plus secondary crushing (PSC). This analysis is intended to select the most efficient production process for recycled aggregates (RA). The RA used here resulted from precast products (P), with strength classes of 20 MPa, 45 MPa and 65 MPa, and from laboratory-made concrete (L) with the same compressive strengths. The concrete was evaluated with the following tests: compressive strength; splitting tensile strength; modulus of elasticity; carbonation resistance; chloride penetration resistance; capillary water absorption; and water absorption by immersion. These findings contribute to a solid and innovative basis that allows the precast industry to use the waste it generates without restrictions. © (2015) Trans Tech Publications, Switzerland.

Relevance:

20.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing: the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10].

Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data into statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
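
As a generic numpy sketch of the orthogonal subspace projection idea just described (ours, not the implementation from reference 23), each pixel is projected onto the orthogonal complement of the span of the undesired signatures before scoring the signature of interest:

```python
# A minimal sketch of orthogonal subspace projection (OSP): suppress the
# undesired endmember signatures U, then score the target signature d.
# Generic linear algebra, not the implementation from the cited references.
import numpy as np

def osp_projector(U: np.ndarray) -> np.ndarray:
    """P = I - U (U^T U)^{-1} U^T projects onto the complement of span(U)."""
    n_bands = U.shape[0]
    return np.eye(n_bands) - U @ np.linalg.pinv(U)

def osp_detector(d: np.ndarray, U: np.ndarray, pixels: np.ndarray) -> np.ndarray:
    """Score each pixel (columns of `pixels`) for the target signature d."""
    P = osp_projector(U)
    return d @ P @ pixels  # one score per pixel

# Toy example: 5 bands, 2 undesired signatures, 1 target, 3 mixed pixels.
rng = np.random.default_rng(0)
U = rng.random((5, 2))       # undesired signatures (columns)
d = rng.random(5)            # target signature
pixels = rng.random((5, 3))  # pixel spectra (columns)
print(osp_detector(d, U, pixels))
```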
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data: since the sum of the abundance fractions is constant, the abundances are dependent. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, the source densities and the noise covariance are estimated from the observed data by maximum likelihood; second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.

Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
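
As a hedged sketch of the kind of simulated experiment this refers to (ours, not the chapter's actual setup), one can draw sum-to-one abundances from a Dirichlet distribution, mix them linearly, and run FastICA; the constant-sum dependence among the sources is precisely what limits recovery:

```python
# A minimal simulated-data sketch, not the chapter's experiment: linear
# mixtures of Dirichlet abundances (which sum to one, hence are dependent)
# fed to FastICA. Recovery is imperfect precisely because of that dependence.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_pixels, n_endmembers, n_bands = 5000, 3, 10

abundances = rng.dirichlet(alpha=np.ones(n_endmembers), size=n_pixels)  # rows sum to 1
signatures = rng.random((n_bands, n_endmembers))                        # mixing matrix M
noise = 0.01 * rng.standard_normal((n_pixels, n_bands))
observations = abundances @ signatures.T + noise                        # y = M a + n

ica = FastICA(n_components=n_endmembers, random_state=0)
estimated_sources = ica.fit_transform(observations)  # attempt to recover abundances
# Correlating estimated_sources with the true abundances shows only partial
# separation: the constant-sum constraint violates ICA's independence assumption.
```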
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information may be very far from the true one. Nevertheless, some abundance fractions may be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR.

We end the chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM) type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.