905 results for One-Step Learning


Relevance:

90.00%

Publisher:

Abstract:

Since the beginning of the Internet, Internet Service Providers (ISPs) have seen the need to give users' traffic different treatments defined by agreements between the ISP and its customers. This procedure, known as Quality of Service Management, has not changed much in recent years (DiffServ and Deep Packet Inspection have been the most widely adopted mechanisms). However, the continuing growth of Internet users and services, together with the application of recent Machine Learning techniques, opens up the possibility of going one step further in the smart management of network traffic. In this paper, we first survey current tools and techniques for QoS Management. We then introduce clustering and classification Machine Learning techniques for traffic characterization, along with the concept of Quality of Experience. Finally, with all these components, we present a brand new framework that manages Quality of Service in a smart way in a telecom Big Data scenario, for both mobile and fixed communications.
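As a rough illustration of the traffic-characterization step such a framework could rely on (a generic sketch, not code from the paper), the example below clusters flow-level features with k-means; the feature set and the number of clusters are assumptions made purely for the example.

```python
# Illustrative sketch: unsupervised characterization of traffic flows with k-means.
# The flow features and k = 4 are assumptions for this example, not from the paper.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical per-flow features: mean packet size, duration, bytes/s, packets/s.
flows = rng.lognormal(mean=4.0, sigma=1.0, size=(1000, 4))

X = StandardScaler().fit_transform(flows)                 # zero mean, unit variance
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

for c in range(4):
    print(f"cluster {c}: {np.sum(labels == c)} flows")    # traffic classes found
```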

Relevance:

90.00%

Publisher:

Abstract:

The contemporary consumer, immersed in a new communication environment, amplifies his or her expression, being able to evaluate a brand or product and broadcast that opinion through social networks; in other words, the consumer spontaneously voices opinions and desires in dialogue with peers on online social networks. It is in this environment of participation and interaction (cyberspace) that our object of study is located: online word of mouth, the voice of the contemporary consumer, also known as a personal informative manifestation or a conversation, i.e. opinion sharing. Produced by consumers on online social networks, word of mouth is strengthened by the possibilities for interaction that characterize the network society. In this scenario, the objective of this research is to characterize online word of mouth as a new communication flow among consumers, now amplified by new communication technologies capable of changing brand perception, and to show that brands still use online social networks as a one-way communication environment. Based on three cases selected by convenience (two national and one international), the corpus of analysis was limited to the 5,084 comments posted in response to news stories on the G1 portal and on the corresponding Facebook fan pages, both relating to the selected cases. Through Content Analysis of the posts, we identified and categorized the voice of the contemporary consumer, making it possible to show that organizations/brands remain rooted in mass-media culture and do not dialogue with their consumers, since they still use online social networks in a one-way fashion and do not give due attention to the current flow in which the shared opinion of consumers in the network society becomes evident.

Relevance:

90.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

90.00%

Publisher:

Abstract:

Undergraduate psychology students from stepfamilies (always one step-parent and one biological parent) and biologically intact families (always both biological parents) participated in this study. The goal was to assess perceptions of stepfamilies (N = 106, Nstepfamilies = 44, Nbiological = 62, age range = 17.17 to 28.92 years, M = 19.46 years). One theoretical perspective, the social stigma hypothesis, argues that there is a stigma attached to stepfamilies, or that stepfamilies are consistently associated with negative stereotypes. In the current study, participants were assessed on a number of variables, including a semantic differential scale, a perceived conflict scale and a perceived general satisfaction scale. A consistently negative view of stepfamilies was found to be prevalent. Furthermore, the negative stereotypes existed irrespective of participant family type. The results support the theoretical view that stepfamilies are stereotypically viewed negatively when compared to biological families.

Relevance:

90.00%

Publisher:

Abstract:

Purpose – The literature on interfirm networks devotes scant attention to the ways collaborating firms combine and integrate the knowledge they share and to the subsequent learning outcomes. This study aims to investigate how motorsport companies use network ties to share and recombine knowledge and the learning that occurs both at the organizational and dyadic network levels. Design/methodology/approach – The paper adopts a qualitative and inductive approach with the aim of developing theory from an in-depth examination of the dyadic ties between motorsport companies and the way they share and recombine knowledge. Findings – The research shows that motorsport companies with substantial competences in managing knowledge flows do so by taking advantage of bridging ties. While bridging ties allow motorsport companies to reach distant and diverse sources of knowledge, the strengthening of those ties and the formation of relational capital facilitate the mediation and overlapping of that knowledge. Research limitations/implications – The analysis rests on a qualitative account of a single industry and does not take into account different types of interfirm networks (e.g. alliances, constellations, consortia) and governance structures. Cross-industry analyses may provide a more fine-grained picture of the practices used to recombine knowledge and the ideal composition of interfirm ties. Practical implications – This study provides some interesting implications for scholars and managers concerned with the management of innovation activities at the interfirm level. From a managerial point of view, the recognition of the different roles played by network-spanning connections is particularly salient and raises issues concerning the effective design and management of interfirm ties. Originality/value – Although much of the literature emphasizes the role of bridging ties in connecting to diverse pools of knowledge, this paper goes one step further and investigates in more depth how firms gather and combine distant and heterogeneous sources of knowledge through the use of strengthened bridging ties and a micro-context conducive to high-quality relationships.

Relevance:

90.00%

Publisher:

Abstract:

Organizations are seeking new, integrated systems that enable rapid changes through early identification of opportunities and problems, tracking of progress against plans, flexible allocation of resources to achieve goals, and consistent operations. Total Quality Management (TQM) is an overall business strategy: it means that all activities of the company are focused on satisfying all of the company's stakeholders. TQM can be realised by using the EFQM model. The EFQM model is a tool that organizations may use as a framework for self-evaluation, enabling an organization to identify its strengths and areas for improvement and the extent to which its operations and results are in line with the characteristics of an excellent organization. We focus on a training organisation, or on the learning department of an organization, and therefore limit the EFQM model to training/learning activities; the EFQM model can be applied perfectly well at the level of a single activity (business line) of a company. We selected the main criteria for which the learner can play the role of assessor, so only three main criteria remain: the enabling resources, the enabling processes and the (learning) results for the learner. We limited the last one to "learning results" based on the Kirkpatrick model.

Relevance:

90.00%

Publisher:

Abstract:

Using the learning descriptions of graduates of a graduate ministry program, the mechanisms of interactions between the knowledge facets in learning processes were explored and described. The intent of the study was to explore how explicit, implicit, and emancipatory knowledge facets interacted in the learning processes at or about work. The study provided empirical research on Yang's (2003) holistic learning theory. A phenomenological research design was used to explore the essence of knowledge facet interactions. I achieved epoche through the disclosure of assumptions and a written self-experience to bracket biases. A criterion-based, stratified sampling strategy was used to identify participants. The sample was stratified by graduation date. The sample consisted of 11 participants and was composed primarily of married (n = 9), white, non-Hispanic (n = 10), females (n = 9), who were Roman Catholic (n = 9). Professionally, the majority of the group were teachers or professors (n = 5). A semi-structured interview guide with scheduled and unscheduled probes was used. Each approximately 1-hour-long interview was digitally recorded and transcribed. The transcripts were coded using a priori codes from holistic learning theory and one emergent code. The coded data were analyzed by identifying patterns, similarities, and differences under each code and then between codes. Steps to increase the trustworthiness of the study included member checks, coding checks, and thick descriptions of the data. Five themes were discovered, including (a) the difficulty of describing interactions between knowledge facets; (b) actual mechanisms of interactions between knowledge facets; (c) knowledge facets initiating learning and dominating learning processes; (d) the dangers of one-dimensional learning, or using only one knowledge facet to learn; and (e) the role of community in learning. The interpretation confirmed, extended, and challenged holistic learning theory. Mechanisms of interaction included knowledge facets expressing, informing, changing, and guiding one another. Implications included the need for a more complex model of learning and the value of seeing spirituality in the learning process. The study raised questions for future research, including exploring learning processes with people from non-Christian faith traditions or other academic disciplines and the role of spiritual identity in learning.

Relevance:

90.00%

Publisher:

Abstract:

This work introduces joint power amplifier (PA) and I/Q modulator modelling and compensation for Long-Term Evolution (LTE) transmitters using artificial neural networks (ANNs). The proposed solution utilizes a powerful nonlinear autoregressive with exogenous inputs (NARX) ANN architecture, which yields noticeable results for high peak-to-average power ratio (PAPR) LTE signals. Given the ANNs' learning capabilities, this one-step solution, which includes the mitigation of both PA nonlinearity and I/Q modulator impairments, is both accurate and adaptable.
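As a rough illustration of the NARX idea (a generic sketch, not the authors' model), the example below trains a small feed-forward network on current and lagged input samples plus lagged output samples of a toy nonlinear system with memory, which is the usual way a NARX-style behavioral model is set up; the lag depths, layer sizes and toy nonlinearity are assumptions.

```python
# NARX-style sketch: predict the current output from current/past inputs and past outputs.
# Lag depths, network size and the toy nonlinearity are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)                  # toy baseband input (real-valued here)
y = np.tanh(0.8 * x) + 0.1 * np.roll(x, 1)     # toy nonlinear system with one-sample memory

def lagged(sig, lags):
    """Stack sig delayed by 1..lags samples as feature columns."""
    return np.column_stack([np.roll(sig, k) for k in range(1, lags + 1)])

nx, ny = 3, 2                                   # exogenous-input lags and output lags
X = np.hstack([x[:, None], lagged(x, nx), lagged(y, ny)])   # NARX regressors
X[: max(nx, ny)] = 0.0                          # crude handling of the first samples

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X[:4000], y[:4000])
print("held-out R^2:", model.score(X[4000:], y[4000:]))
```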

Relevance:

90.00%

Publisher:

Abstract:

Nanotechnology has revolutionised humanity's capability of building microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and more complex from the chemical perspective, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide two-dimensional projection (shadow) images of the 3D structure, leaving the three-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometre resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality-assessment investigation was conducted to provide a quantitative analysis of the quality of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed ET tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of the post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed ET tomogram. This motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method is proposed, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) in an adaptive way and reconstructs the tomogram simultaneously from highly undersampled tilt series. In this method, the sparsity is applied on overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulated and real ET experiments on several morphologies are performed with a variety of setups. The reconstruction results validate the method's efficiency in both noiseless and noisy cases and show that it yields an improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to worry about which sparsifying transform to select or whether the images used strictly follow the pre-conditions of a certain transform (e.g. strictly piecewise constant for Total Variation minimisation). This also avoids artifacts that can be introduced by specific sparsifying transforms (e.g. the staircase artifacts that may result when using Total Variation minimisation).
Moreover, this thesis shows how reliable, elementally sensitive tomography using electron energy loss spectroscopy (EELS) is possible with the aid of both the appropriate use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss EELS from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
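As a generic illustration of the patch-based, learned-dictionary idea described above (a simplified 2-D stand-in, not the DLET algorithm itself), the sketch below learns a dictionary on overlapping image patches of a noisy image and reconstructs a denoised version from the sparse codes; the patch size, dictionary size and sparsity level are assumptions.

```python
# Generic patch-based dictionary learning sketch (scikit-learn): learn a sparsifying
# dictionary on overlapping patches, then reconstruct from the sparse codes.
# Patch size, dictionary size and sparsity level are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                        # toy "particle" in a 2-D slice
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

patch_size = (8, 8)
patches = extract_patches_2d(noisy, patch_size)
flat = patches.reshape(len(patches), -1)
mean = flat.mean(axis=1, keepdims=True)
flat = flat - mean                               # remove the per-patch DC component

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, batch_size=256,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=4, random_state=0)
codes = dico.fit(flat).transform(flat)           # sparse code of each patch
denoised_patches = (codes @ dico.components_ + mean).reshape(patches.shape)
denoised = reconstruct_from_patches_2d(denoised_patches, noisy.shape)

print("RMSE noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))
```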

Relevance:

90.00%

Publisher:

Abstract:

The optical constants were calculated using Wolfe's method. These constants, namely the absorption coefficient (α), the refractive index (n) and the thickness of a thin film (d), are important in the optical characterization of the material. Wolfe's method was compared with the method employed by R. Swanepoel. A constrained nonlinear programming model was developed, making it possible to estimate the optical constants of semiconducting thin films from transmission data alone. A quadratic-programming solution to the nonlinear programming model was presented. The reliability of the proposed method was demonstrated, obtaining values of α = 10378.34 cm−1, n = 2.4595, d = 989.71 nm and Eg = 1.39 eV through numerical experiments with spectral transmittance measurements on Cu3BiS3 thin films.
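As a rough illustration of estimating the constants from transmission data alone (a simplified stand-in for the constrained optimization described above, not Wolfe's or Swanepoel's actual formulation), the sketch below fits n, α and d to a synthetic transmittance spectrum by bounded nonlinear least squares, using a Swanepoel-type transmittance expression and treating n and α as constant over the fitted window; the substrate index, bounds and starting point are assumptions.

```python
# Simplified sketch: estimate n, alpha and d of a thin film from transmittance alone
# via bounded nonlinear least squares. A Swanepoel-type expression for a uniform film
# on a transparent substrate is used; n and alpha are held constant over the window,
# and the substrate index, bounds and starting point are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

S = 1.51                                          # assumed substrate refractive index

def transmittance(lam_nm, n, alpha_cm, d_nm):
    x = np.exp(-alpha_cm * d_nm * 1e-7)           # absorbance factor (d converted to cm)
    phi = 4.0 * np.pi * n * d_nm / lam_nm         # phase thickness
    A = 16.0 * n**2 * S
    B = (n + 1.0)**3 * (n + S**2)
    C = 2.0 * (n**2 - 1.0) * (n**2 - S**2)
    D = (n - 1.0)**3 * (n - S**2)
    return A * x / (B - C * x * np.cos(phi) + D * x**2)

lam = np.linspace(700.0, 1100.0, 200)             # wavelengths in nm
T_meas = transmittance(lam, 2.46, 1.0e4, 990.0)   # synthetic "measured" spectrum
T_meas += 0.005 * np.random.default_rng(0).standard_normal(lam.size)

def residuals(p):
    return transmittance(lam, *p) - T_meas

# Interference fringes make the fit multimodal, so a sensible starting thickness matters.
fit = least_squares(residuals, x0=[2.2, 2.0e4, 950.0],
                    bounds=([1.5, 0.0, 500.0], [3.5, 1.0e5, 1500.0]))
print("n = %.4f, alpha = %.1f cm^-1, d = %.1f nm" % tuple(fit.x))
```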

Relevance:

90.00%

Publisher:

Abstract:

Biomarkers are nowadays essential tools for staying one step ahead in fighting disease, enabling an enhanced focus on disease prevention and on the probability of its occurrence. Multidisciplinary research has been an important step towards the continuing discovery of new biomarkers. Biomarkers are defined as measurable biochemical indicators of the presence of disease or as indicators for monitoring disease progression. Currently, biomarkers are used in several domains such as oncology, neurology, cardiovascular, inflammatory and respiratory disease, and several endocrinopathies. Bridging biomarkers in a One Health perspective has proven useful in almost all of these domains. In oncology, humans and animals are found to be subject to the same environmental and genetic predisposing factors: examples include mutations in the BRCA1 gene predisposing to breast cancer, both in humans and dogs, with increased prevalence in certain dog breeds and human ethnic groups. Also, breast-feeding frequency and duration have been related to a decreased risk of breast cancer in women and bitches. When it comes to infectious diseases, this parallelism is prone to be even more important, since as many as 75% of all emerging diseases are believed to be zoonotic. Examples of the successful use of biomarkers have been found in several zoonotic diseases such as Ebola, dengue, leptospirosis or West Nile virus infections. Acute phase proteins (APPs) have been used for quite some time as biomarkers of inflammatory conditions. These have been used in human health but also in the veterinary field, for example in mastitis evaluation and in the diagnosis of PRRS (porcine reproductive and respiratory syndrome). One advantage is that these biomarkers can be much easier to assess than other conventional disease diagnostic approaches (for example, they can be measured in easy-to-collect saliva samples). Another domain in which biomarkers have been essential is food safety: the possibility of measuring exposure to chemical contaminants or other biohazards present in the food chain, which are sometimes analytical challenges due to their low bioavailability in body fluids, is nowadays a major breakthrough. Finally, biomarkers are considered the key to providing more personalized therapies, with more efficient outcomes and fewer side effects. This approach is expected to be the correct path to follow in veterinary medicine as well in the near future.

Relevance:

90.00%

Publisher:

Abstract:

Machine Learning makes computers capable of performing tasks that typically require human intelligence. A domain where it is having a considerable impact is the life sciences, allowing researchers to devise new biological analysis protocols, develop patient treatments more efficiently and faster, and reduce healthcare costs. This thesis presents new Machine Learning methods and pipelines for the life sciences, focusing on the unsupervised field. At the methodological level, two methods are presented. The first is an "Ab Initio Local Principal Path", a revised and improved version of a pre-existing algorithm in the manifold learning realm. The second contribution is an improvement over the Import Vector Domain Description (one-class learning) through the Kullback-Leibler divergence. It hybridizes kernel methods with Deep Learning, obtaining a scalable solution, an improved probabilistic model, and state-of-the-art performance. Both methods are tested through several experiments, with a central focus on their relevance to the life sciences, and the results show that they improve on the performance achieved by their previous versions. At the applicative level, two pipelines are presented. The first is for the analysis of RNA-Seq datasets, both transcriptomic and single-cell data, and is aimed at identifying genes that may be involved in biological processes (e.g., the transition of tissues from normal to cancer). In this project, an R package is released on CRAN to make the pipeline accessible to the bioinformatics community through high-level APIs. The second pipeline is in the drug discovery domain and is useful for identifying druggable pockets, namely regions of a protein with a high probability of accepting a small molecule (a drug). Both pipelines achieve remarkable results. Lastly, a side application is developed to identify the strengths and limitations of the "Principal Path" algorithm by analyzing vector spaces induced by Convolutional Neural Networks; this application is conducted in the music and visual arts domains.
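As a generic illustration of one-class learning, the family of methods to which the Import Vector Domain Description belongs, the sketch below fits a kernel-based one-class model (scikit-learn's OneClassSVM, used here as a stand-in rather than the thesis's method) on "normal" data and flags points that fall outside the learned support; all parameters and the toy data are illustrative assumptions.

```python
# Generic one-class learning illustration with a kernel method (OneClassSVM).
# A stand-in for the general idea, not the thesis's IVDD/KL-divergence model.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
inliers = rng.normal(loc=0.0, scale=1.0, size=(500, 2))    # "normal" training data
outliers = rng.uniform(low=-6.0, high=6.0, size=(50, 2))   # unseen anomalous points

model = OneClassSVM(kernel="rbf", gamma=0.2, nu=0.05).fit(inliers)
pred_in = model.predict(inliers)       # +1 = inlier, -1 = outlier
pred_out = model.predict(outliers)

print("inliers kept    :", np.mean(pred_in == 1))
print("outliers flagged:", np.mean(pred_out == -1))
```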

Relevance:

80.00%

Publisher:

Abstract:

The aim was to describe the outcome of neonatal hearing screening (NHS) and audiological diagnosis in neonates in the NICU. The sample was divided into Group I (neonates who underwent one-step NHS) and Group II (neonates who underwent test-and-retest NHS). The NHS procedure was automated auditory brainstem response. NHS was performed in 82.1% of surviving neonates. For GI, the referral rate was 18.6% and the false-positive rate was 62.2% (normal hearing at the diagnostic stage). In GII, with the retest, the referral rate dropped to 4.1% and the false-positive rate to 12.5%. Sensorineural hearing loss was found in 13.2% of infants and conductive hearing loss in 26.4% of cases. There was one case of auditory neuropathy spectrum (1.9%). The dropout rate over the whole process was 21.7% for GI and 24.03% for GII. We concluded that it was not possible to perform universal NHS in the studied sample or, in many cases, to apply it within the first month of life. The retest reduced the failure and false-positive rates and did not increase dropout, indicating that it is a recommendable step in NHS programs in the NICU. The incidence of hearing loss was 2.9%, considering sensorineural hearing loss (0.91%), conductive hearing loss (1.83%) and auditory neuropathy spectrum (0.19%).

Relevance:

80.00%

Publisher:

Abstract:

OBJECTIVE: To assess microleakage in conservative class V cavities prepared with aluminum-oxide air abrasion or a turbine and restored with self-etching or etch-and-rinse adhesive systems. MATERIALS AND METHODS: Forty premolars were randomly assigned to 4 groups (I and II: air abrasion; III and IV: turbine) and class V cavities were prepared on the buccal surfaces. The conditioning approaches were: groups I/III, 37% phosphoric acid; groups II/IV, self-priming etchant (Tyrian-SPE). Cavities were restored with One Step Plus/Filtek Z250. After finishing, specimens were thermocycled, immersed in 50% silver nitrate, and serially sectioned. Microleakage at the occlusal and cervical interfaces was measured in mm and calculated by software. Data were subjected to ANOVA and Tukey's test (α=0.05). RESULTS: The marginal seal provided by air abrasion was similar to that of the high-speed handpiece, except for group I. There was a significant difference between enamel and dentin/cementum margins for the air abrasion groups (I and II). The etch-and-rinse adhesive system promoted a better marginal seal. At enamel and dentin/cementum margins, the highest microleakage values were found in cavities treated with the self-etching adhesive system. At dentin/cementum margins, high-speed handpiece preparations associated with the etch-and-rinse system provided the least dye penetration. CONCLUSION: The marginal seal of cavities prepared with aluminum-oxide air abrasion was different from that of conventionally prepared cavities, and the etch-and-rinse system promoted a higher marginal seal at both enamel and dentin margins.

Relevance:

80.00%

Publisher:

Abstract:

The purpose of the current study was to understand how visual information about an ongoing change in obstacle size is used during obstacle avoidance for both lead and trail limbs. Participants were required to walk in a dark room and to step over an obstacle edged with a special tape visible in the dark. The obstacle's dimensions were manipulated one step before obstacle clearance by increasing or decreasing its size. Two increasing and two decreasing obstacle conditions were combined with seven control static conditions. Results showed that information about the obstacle's size was acquired and used to modulate trail limb trajectory, but had no effect on lead limb trajectory. The adaptive step was influenced by the time available to acquire and process visual information. In conclusion, visual information about obstacle size acquired during lead limb crossing was used in a feedforward manner to modulate trail limb trajectory.