62 results for Structure learning


Relevance:

30.00%

Publisher:

Abstract:

Final Master's project submitted to obtain the degree of Master in Mechanical Engineering

Relevance:

20.00%

Publisher:

Abstract:

We develop a new model of Absorptive Capacity that takes into account two variables, Learning and Knowledge, to explain how companies transform information into knowledge.

Relevance:

20.00%

Publisher:

Abstract:

Novel alternating copolymers comprising biscalix[4]arene-p-phenylene ethynylene and m-phenylene ethynylene units (CALIX-m-PPE) were synthesized using Sonogashira-Hagihara cross-coupling polymerization. Good isolated yields (60-80%) were achieved for the polymers, which show M_n ranging from 1.4 x 10^4 to 5.1 x 10^4 g mol^-1 (gel permeation chromatography analysis), depending on the specific polymerization conditions. The structural analysis of CALIX-m-PPE was performed by 1H, 13C, 13C-1H heteronuclear single quantum correlation (HSQC), 13C-1H heteronuclear multiple bond correlation (HMBC), correlation spectroscopy (COSY), and nuclear Overhauser effect spectroscopy (NOESY), in addition to Fourier-transform infrared spectroscopy and microanalysis, allowing its full characterization. Depending on the reaction setup, variable amounts (16-45%) of diyne units were found in the polymers, although their photophysical properties are essentially the same. It is demonstrated that CALIX-m-PPE does not form ground- or excited-state interchain interactions, owing to the highly crowded environment of the main chain imparted by both calix[4]arene side units, which behave as insulators inhibiting main-chain pi-pi stacking. It was also found that the luminescent properties of CALIX-m-PPE are markedly different from those of an all-p-linked phenylene ethynylene copolymer (CALIX-p-PPE) previously reported. The unexpected appearance of a low-energy emission band at 426 nm, in addition to the locally excited-state emission (365 nm), together with a quite low fluorescence quantum yield (Phi = 0.02) and double-exponential decay dynamics, led to the formulation of an intramolecular exciplex as the new emissive species.

Relevance:

20.00%

Publisher:

Abstract:

In the Sparse Point Representation (SPR) method, the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, the derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If the required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated as a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of the wavelet reconstruction operators, which hold for SPR grids with appropriate structure. This implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken with the grid structure in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
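As a rough illustration of the retention criterion (not the authors' implementation), the following minimal sketch marks significant points of a 1D function using the cubic (Deslauriers-Dubuc 4-point) interpolating subdivision scheme; the test function and the threshold `eps` are hypothetical:

```python
import numpy as np

def dd4_predict(f, i):
    """Cubic (Deslauriers-Dubuc 4-point) midpoint prediction from coarse values.
    Stencil indices are clamped at the boundaries for simplicity."""
    n = len(f)
    im1, i0, i1, i2 = max(i - 1, 0), i, min(i + 1, n - 1), min(i + 2, n - 1)
    return (-f[im1] + 9 * f[i0] + 9 * f[i1] - f[i2]) / 16.0

def spr_mask(f_fine, eps):
    """Mark fine-grid odd points whose interpolation error (the wavelet
    coefficient) exceeds eps; even points belong to the coarse grid."""
    coarse = f_fine[::2]
    keep = np.zeros(len(f_fine), dtype=bool)
    keep[::2] = True                                  # coarse points always kept
    for k in range(len(coarse) - 1):
        d = f_fine[2 * k + 1] - dd4_predict(coarse, k)  # detail coefficient
        keep[2 * k + 1] = abs(d) > eps
    return keep

x = np.linspace(0.0, 1.0, 129)
f = np.tanh(50 * (x - 0.5))        # smooth except near x = 0.5
mask = spr_mask(f, eps=1e-3)
# The retained grid is dense near the sharp transition and coarse elsewhere.
```

The sketch covers a single refinement level; the method described above applies the same error criterion across a hierarchy of scales.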

Relevance:

20.00%

Publisher:

Abstract:

This paper is an elaboration of the DECA algorithm [1] for blindly unmixing hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. The proposed method, like DECA, is tailored to highly mixed data, for which geometric approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We therefore resort to a statistical framework, where the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL) principle; 2) the generalized expectation maximization (GEM) algorithm adopted to infer the model parameters is improved by using alternating minimization and augmented Lagrangian methods to compute the mixing matrix. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
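The linear mixing model with Dirichlet-distributed abundances can be illustrated with a small simulation; all sizes, signatures, and Dirichlet parameters below are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical endmember signatures: B spectral bands x p endmembers.
B, p, n_pixels = 50, 3, 1000
M = rng.uniform(0.1, 1.0, size=(B, p))    # mixing matrix (columns = endmembers)

# Abundances drawn from a two-mode mixture of Dirichlet densities, which by
# construction satisfy non-negativity and the sum-to-one constraint.
modes = rng.integers(0, 2, size=n_pixels)
a0 = rng.dirichlet([8, 1, 1], size=n_pixels)   # mode 0 samples
a1 = rng.dirichlet([1, 1, 8], size=n_pixels)   # mode 1 samples
alphas = np.where(modes[:, None] == 0, a0, a1)  # pick one mode per pixel

# Each observed pixel is a noisy linear mixture of the endmembers.
Y = alphas @ M.T + 0.01 * rng.standard_normal((n_pixels, B))
```

DECA's task is the inverse problem: given only `Y`, recover `M` and the Dirichlet mixture over `alphas`.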

Relevance:

20.00%

Publisher:

Abstract:

Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source temporal correlation at the decoder rather than at the encoder, as in predictive video coding. Although some progress has been made in recent years, WZ video coding is still far from the compression performance of predictive video coding, especially for high and complex motion content. The WZ video codec adopted in this study is based on a transform-domain WZ video coding architecture with feedback channel-driven rate control, whose modules have been improved with some recent coding tools. This study proposes a novel motion learning approach that successively improves the rate-distortion (RD) performance of the WZ video codec as decoding proceeds, making use of the already decoded transform bands to improve the decoding process for the remaining transform bands. The results obtained reveal gains of up to 2.3 dB in the RD curves over the same codec without the proposed motion learning approach, for high-motion sequences and long group of pictures (GOP) sizes.

Relevance:

20.00%

Publisher:

Abstract:

The automatic organization of e-mail messages is a current challenge in the field of machine learning. The excessive number of messages affects more and more users, especially those who use e-mail as a communication and work tool. This thesis addresses the problem of the automatic organization of e-mail messages by proposing a solution aimed at the automatic tagging of messages. Automatic tagging uses the e-mail folders previously created by the users, treating them as tags, and suggests multiple tags for each message (top-N). Several learning techniques are studied, and the various fields that compose an e-mail message are analyzed to determine their suitability as classification features. This work focuses on the textual fields (the subject and the body of the messages), studying different forms of representation, feature selection, and classification algorithms. The participant fields are also evaluated, using classification algorithms that represent them either with the vector-space model or as a graph. The various fields are combined for classification using the Majority Voting classifier-combination technique. Tests are performed with a subset of Enron e-mail messages and a private dataset provided by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC). These datasets are analyzed to understand the characteristics of the data. The system is evaluated through classifier accuracy. The results obtained show significant improvements in comparison with related work.
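The Majority Voting combination of per-field classifiers can be sketched minimally; the folder labels and classifier outputs below are hypothetical:

```python
from collections import Counter

def top_n_labels(predictions, n=3):
    """Majority voting: each field classifier casts one vote for its
    predicted folder label; return the n most-voted labels for a message."""
    return [label for label, _ in Counter(predictions).most_common(n)]

# Hypothetical single-label outputs of classifiers trained on the subject,
# body, and participant fields of one message:
votes = ["projects", "projects", "meetings"]
print(top_n_labels(votes, n=2))   # → ['projects', 'meetings']
```

With top-N suggestion, the user picks the correct folder from a short ranked list instead of a single hard assignment.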

Relevance:

20.00%

Publisher:

Abstract:

Reinforcement Learning is an area of Machine Learning that deals with how an agent should take actions in an environment so as to maximize accumulated reward. This type of learning is inspired by the way humans learn and has led to the creation of various reinforcement learning algorithms. These algorithms focus on how an agent's behaviour can be improved, assuming independence from its surroundings. The current work studies the application of reinforcement learning methods to the inverted pendulum problem. The importance of the variability of the environment (factors external to the agent) for the execution of reinforcement learning agents is studied using a model that seeks to obtain equilibrium (stability) through dynamism: a Cart-Pole system, or inverted pendulum. We sought to improve the behaviour of the autonomous agents by changing the information passed to them while keeping the agent's internal parameters constant (learning rate, discount factor, decay rate, etc.), instead of the classical approach of tuning those internal parameters. The influence of changes to the state set and the action set on an agent's capability to solve the Cart-Pole problem was studied. We studied the typical behaviour of reinforcement learning agents applied to the classic BOXES model, and a new way of characterizing the environment was proposed, using the notion of convergence towards a reference value. We demonstrate the performance gain of this new method applied to a Q-Learning agent.
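A minimal tabular Q-Learning sketch, run on a toy two-state chain rather than the Cart-Pole; the learning rate, discount factor, and exploration rate are hypothetical, not the thesis settings:

```python
import random

def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.99):
    """One Q-Learning update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Toy environment: state 0 is the start, state 1 is the rewarded goal.
actions = ["left", "right"]
Q = {}
random.seed(0)
for _ in range(500):                     # episodes
    s = 0
    while s != 1:
        # Epsilon-greedy action selection (epsilon = 0.1).
        if random.random() < 0.1:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda a: Q.get((s, a), 0.0))
        s2 = 1 if a == "right" else 0    # deterministic transition
        r = 1.0 if s2 == 1 else 0.0      # reward only at the goal
        q_learning_step(Q, s, a, r, s2, actions)
        s = s2
# After training, "right" dominates "left" in state 0.
```

The thesis varies what the agent observes (the state and action sets) while holding `alpha`, `gamma`, and the exploration schedule fixed, which this sketch keeps as constants.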

Relevance:

20.00%

Publisher:

Abstract:

The crustal and lithospheric mantle structure at the southern segment of the west Iberian margin was investigated along a 370 km long seismic transect. The transect goes from unthinned continental crust onshore to oceanic crust, crossing the ocean-continent transition (OCT) zone. The wide-angle data set includes recordings from 6 OBSs and 2 inland seismic stations. Kinematic and dynamic modeling provided a 2D velocity model that proved consistent with the modeled free-air anomaly data. The interpretation of coincident multi-channel near-vertical and wide-angle reflection data sets allowed the identification of four main crustal domains: (i) continental (east of 9.4°W); (ii) continental thinning (9.4°W-9.7°W); (iii) transitional (9.7°W to ~10.5°W); and (iv) oceanic (west of ~10.5°W). In the continental domain the complete crustal section of slightly thinned continental crust is present. The upper (UCC, 5.1-6.0 km/s) and lower continental crust (LCC, 6.9-7.2 km/s) are seismically reflective and have intermediate to low P-wave velocity gradients. The middle continental crust (MCC, 6.35-6.45 km/s) is generally unreflective, with a low velocity gradient. The main thinning of the continental crust occurs in the thinning domain by attenuation of the UCC and the LCC. Major thinning of the MCC starts west of the LCC pinchout point, where it rests directly upon the mantle. In the thinning domain the Moho slope is at least 13° and the continental crust thins seaward from 22 to 11 km over a distance of ~35 km, stretched by a factor of 1.5 to 3. In the oceanic domain a two-layer, high-gradient igneous crust (5.3-6.0 km/s; 6.5-7.4 km/s) was modeled. The intra-crustal interface correlates with prominent mid-basement reflections, 10-15 km long, in the multi-channel seismic profile. Strong secondary reflected PmP phases require a first-order discontinuity at the Moho.
The sedimentary cover can be as thick as 5 km, and the igneous crustal thickness varies from 4 to 11 km in the west, where the profile reaches the Madeira-Tore Rise. In the transitional domain the crust has a complex structure that varies both horizontally and vertically. Beneath the continental slope it includes exhumed continental crust (6.15-6.45 km/s). Strong diffractions were modeled to originate at the lower interface of this layer. The western segment of this transitional domain is highly reflective at all levels, probably due to dykes and sills, according to the high apparent susceptibility and density modeled at this location. The sub-Moho mantle velocity is found to be 8.0 km/s, but velocities lower than 8.0 km/s confined to short segments are not excluded by the data. Strong P-wave wide-angle reflections are modeled to originate at a depth of 20 km within the lithospheric mantle, under the eastern segment of the oceanic domain, or even deeper under the transitional domain, suggesting a layered structure for the lithospheric mantle. Both the interface depths and the velocities of the continental section are in good agreement with the conjugate Newfoundland margin. A ~40 km wide OCT with a geophysical signature distinct from that of the OCT to the north favors a two-pulse continental breakup.

Relevance:

20.00%

Publisher:

Abstract:

We report in this paper recent advances in optimizing a color image sensor based on the laser-scanned photodiode (LSP) technique. A novel device based on an a-SiC:H/a-Si:H pin/pin tandem structure has been tested for a proper color separation process, which takes advantage of the different filtering properties due to the different light penetration depths, at different wavelengths, in a-Si:H and a-SiC:H. While the green and red images give a weak response in comparison with previously tested structures, this structure shows very good recognition of the blue color under reverse bias, leaving a good margin for future device optimization towards a complete and satisfactory RGB image mapping. Experimental results on the spectral collection efficiency are presented and discussed from the point of view of color sensor applications. The physics behind the device operation is explained by means of a numerical simulation of the internal electrical configuration of the device.

Relevance:

20.00%

Publisher:

Abstract:

This work presents preliminary results of the study of a novel structure for a laser-scanned photodiode (LSP) type of image sensor. In order to increase the signal output, a stacked p-i-n-p-i-n structure with an intermediate light-blocking layer is used. The image and the scanning beam are incident on opposite sides of the sensor, and their absorption is kept in separate junctions by the intermediate light-blocking layer. As in the usual LSP structure, the scanning-beam-induced photocurrent depends on the local illumination conditions of the image. The main difference between the two structures arises from the fact that in this new structure the image and the scanner have different optical paths, leading to an increase in the photocurrent when the scanning beam is incident on a region illuminated on the image side of the sensor, whereas a decrease in the photocurrent was observed in the single-junction LSP. The results show that the structure can be successfully used as an image sensor, even though some optimization is needed to enhance the performance of the device.

Relevance:

20.00%

Publisher:

Abstract:

Intervention project presented to the Escola Superior de Educação to obtain the master's degree in Didactics of the Portuguese Language in the 1st and 2nd Cycles of Basic Education

Relevance:

20.00%

Publisher:

Abstract:

As teachers, we are challenged every day to solve pedagogical problems, and we have to fight for our students' attention in a media-rich world. I will talk about how we use ICT in Initial Teacher Training and give you some insight into what we are doing. The most important benefit of using ICT in education is that it makes us reflect on our practice. There is no doubt that our classrooms need to be updated, but we need to be critical about every piece of hardware, software or service that we bring into them. It is not only because our budgets are short, but also because e-learning is primarily about learning, not technology. Therefore, we need to have the knowledge and skills required to act in different situations and to choose the best tool for the job. Not all subjects are suitable for e-learning, nor do all students have the skills to organize their own study time. Nor do all teachers want to spend time programming or learning about instructional design and metadata. The promise that easy-to-use authoring tools (e.g. eXe and Reload) would lead all teachers to become authors of Learning Objects and share them in repositories failed, as HyperCard, Toolbook and others had failed before. We need to know a little about many different technologies so we can mobilize this knowledge when a situation requires it: integrate e-learning technologies in the classroom, not as a flipped classroom, just as simple tools. Lecture capture, mobile phones and smartphones, pocket-size camcorders, VoIP, VLEs, live video broadcast, screen sharing, free services for collaborative work, and tools to save, share and sync your files. Do not feel pressured to use everything, every time. Just because we have a whiteboard does not mean we have to make it the centre of the classroom. Start from where you are, with your preferred subject and the tools you master. Then go slowly and try a new tool in a non-formal situation with just one or two students.
And you don't need to be alone: subscribe to a mailing list and share your thoughts with other teachers in a dedicated forum, even better if both are part of a community of practice, and share resources. We did that for music teachers and it was a success, reaching 1,000 members in two years. Just do it.

Relevance:

20.00%

Publisher:

Abstract:

Master's degree in Socio-Organizational Intervention in Health - Specialization: Policies of Administration and Management of Health Services.

Relevance:

20.00%

Publisher:

Abstract:

Master's degree in Socio-Organizational Intervention in Health - Specialization: Policies of Administration and Management of Health Services.