464 results for Toy premiums
Abstract:
A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically, following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well together. He can identify good parts and is aware of the solution quality even before the scheduling process is complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work, which used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm implements this explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed from an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Sets of rule strings are generated in this way, some of which replace previous strings based on fitness. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings.
For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation; 2: allocate nurse to low-cost shifts). At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, two solution pathways emerge: because pure low-cost or pure random allocation produces low-quality solutions, either rule 1 is used for the first two to three nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after using rule 1 two or three times', or vice versa. It should be noted that for ours and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximise the likelihood of the training data, so learning amounts to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover, and Balance of Cost and Cover. In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are (see the sketch below):
1. Set t = 0 and generate an initial population P(0) at random.
2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t).
3. Compute the conditional probabilities of each node according to this set of promising solutions.
4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities; a set of new rule strings O(t) is generated in this way.
5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t+1.
6. If the termination condition is not met (we use 2000 generations), go to step 2.
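The loop below is a minimal Python sketch of these six steps under simplifying assumptions, not the paper's implementation: it assumes a chain-structured Bayesian network (each nurse's rule conditioned on the previous nurse's, as in the toy example), a placeholder fitness function, and illustrative population sizes and generation counts.

```python
import random

# Sketch of the explicit-learning loop: all parameters and the fitness
# function are illustrative placeholders, not the paper's cost model.
N_NURSES = 5      # toy example: five nurses scheduled left to right
N_RULES = 2       # rule 0: random allocation, rule 1: low-cost shifts
POP_SIZE = 20
GENERATIONS = 50  # the paper runs 2000 generations

def fitness(rule_string):
    # Placeholder objective rewarding a single switch between rules,
    # mimicking the 'use rule 2 after 2-3 uses of rule 1' pattern.
    switches = sum(a != b for a, b in zip(rule_string, rule_string[1:]))
    return 1.0 / (1.0 + abs(switches - 1))

def roulette(population, scores):
    # Roulette-wheel selection: pick proportionally to fitness.
    r = random.uniform(0, sum(scores))
    acc = 0.0
    for individual, s in zip(population, scores):
        acc += s
        if acc >= r:
            return individual
    return population[-1]

def learn_network(promising):
    # Learning amounts to counting for multinomial distributions: a marginal
    # for the first nurse and P(rule_i | rule_{i-1}) for later nurses, with
    # add-one smoothing so no probability collapses to zero.
    first = [1] * N_RULES
    cond = [[[1] * N_RULES for _ in range(N_RULES)]
            for _ in range(N_NURSES - 1)]
    for s in promising:
        first[s[0]] += 1
        for i in range(1, N_NURSES):
            cond[i - 1][s[i - 1]][s[i]] += 1
    norm = lambda v: [x / sum(v) for x in v]
    return norm(first), [[norm(row) for row in tbl] for tbl in cond]

def sample_rule_string(first, cond):
    # Generate each nurse's rule from the learned conditional probabilities.
    s = [random.choices(range(N_RULES), weights=first)[0]]
    for i in range(1, N_NURSES):
        s.append(random.choices(range(N_RULES), weights=cond[i - 1][s[-1]])[0])
    return s

population = [[random.randrange(N_RULES) for _ in range(N_NURSES)]
              for _ in range(POP_SIZE)]
for t in range(GENERATIONS):
    scores = [fitness(s) for s in population]                         # step 2
    promising = [roulette(population, scores) for _ in range(10)]
    first, cond = learn_network(promising)                            # step 3
    offspring = [sample_rule_string(first, cond) for _ in range(10)]  # step 4
    population.sort(key=fitness)                                      # step 5:
    population[:len(offspring)] = offspring                           # replace worst

print(max(population, key=fitness))
```

With the placeholder fitness above, the learned conditional table quickly concentrates probability on rule strings with exactly one switch, which is the 'two solution pathways' behaviour described in the toy example.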
Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach may be suitable for other scheduling problems. Another direction for further research is to see whether there is a good construction sequence for individual data instances, given a fixed nurse scheduling order. If so, the good patterns could be recognised and then extracted as new domain knowledge. Using this extracted knowledge, we could assign specific rules to the corresponding nurses beforehand and schedule only the remaining nurses with all available rules, making it possible to reduce the solution space.
Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01.
References
[1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126, 2002.
Abstract:
Doctoral thesis, Universidade de Brasília, Instituto de Física, Programa de Pós-Graduação em Física, 2015.
Abstract:
Master's dissertation presented to the Instituto Superior de Psicologia Aplicada for the degree of Master in the specialty of Clinical Psychology.
Abstract:
In Rio Grande do Norte, craftsmanship generates economic activity, involves a significant number of people, and is diversified in its raw materials and particular types. As a reference point for local craft, ceramics supplied primary necessities as utilitarian domestic ware, acquired piety in religious figures, served as toys in children's amusements and, finally, gained the status of pure ornament. Owing to its historical significance, the district of Santo Antônio do Potengi is considered the most important center of craft-pottery manufacture in the State. The potters' work continues in that locality, anchored between family inheritance and the ever more influential participation of public policies aimed at the sector, a situation evidenced by visible alterations in the shape of the pottery from the 1990s onwards, with the establishment of a cooperative intended for collective production. We observe in this trajectory that such actions, insofar as they aim to structure ideal conditions to sustain artisanal making, do not significantly benefit the craftsmen's social development. It is important not to lose sight of the fact that several dimensions are involved in this process, and that they go beyond the common interest in the object and the consequent economic connotation of its commercialization. They are forms of knowledge that involve access to raw materials, the peculiarities of formal aspects and production methods, and the contextual relations organized to defend the survival of the activity.
Abstract:
Background: The latest national census puts the population of Iranian children (1 - 8 years old) at about 11 million. Moreover, the latest population policies approved by the Supreme Cultural Revolution Council (SCRC) will make this population grow faster. Childhood development is one of the social determinants of health, of which "child's play" is a part. Objectives: This study is an effort to identify the difficulties and challenges of play that influence Iranian children's health nationwide, in order to propose improvement strategies drawing on the views of stakeholders and national studies. Patients and Methods: By analyzing the stakeholders of children's play, the main organizations were identified, and the views of 13 informed people involved in the field were investigated through in-depth semi-structured interviews. A denaturalized approach was employed in analyzing the data. In addition to descriptions of the current state, the development of interventions, and the design of the conceptual model, national reports and studies and other countries' experiences were also reviewed. Results: Society's limited knowledge of children's play, the absence of administrators responsible for children's play, the shortage of public facilities for children's play and their poor geographical and demographic accessibility, the absence of a policy for Iranian toys, and the media's limited attention to the issue are the five major problems stated by interviewees. Conclusions: The proposed interventions are "promoting the educational levels of parents and selected administrators for children's play", "approving the play and toy policy for Iran 2025", and "increasing public facilities for children's play with defined distribution and availability".
Abstract:
What qualities, skills, and knowledge produce quality teachers? Many stakeholders in education argue that teacher quality should be measured by student achievement. This qualitative study shows that good teachers are multi-dimensional; their effectiveness cannot be represented by students' test scores alone. The purpose of this phenomenological study was to gain a deeper understanding of quality in teaching by examining the lived experiences of 10 winners or finalists of the Teacher of the Year (ToY) Award. Phenomenology describes individuals' daily experiences of phenomena, examines how these experiences are structured, and focuses analysis on the perspectives of the persons having the experience (Moustakas, 1994). This inquiry asked two questions: (a) How is teaching experienced by teachers recognized as outstanding Teachers of the Year? and (b) How do ToYs' feelings and perceptions about being good teachers provide insight, if any, into concepts such as pedagogical tact, teacher selfhood, and professional dispositions? Ten participants formed the purposive sample; the major data collection tool was the semi-structured interview (Patton, 1990; Seidman, 2006). Sixty- to ninety-minute interviews were conducted with each participant. Data also included the participants' ToY application essays. Data analysis followed a three-phase process: description, reduction, and interpretation. Findings revealed that the ToYs are dedicated, hard-working individuals. They exhibit behaviors such as working beyond the school day, engaging in lifelong learning, and assisting colleagues to improve their practice. Working as teachers is their life's compass, guiding them and wrapping them into meaningful and purposeful lives. Pedagogical tact, teacher selfhood, and professional dispositions were shown to be relevant, offering important insights into good teaching. Results indicate that for these ToYs, good teaching is experienced by getting through to students using effective and moral means; they are emotionally open, have a sense of the sacred, and operate from a sense of intentionality. The essence of the ToYs' teaching experience was being properly engaged in their craft, embodying logical, psychological, and moral realms. The findings challenge the current process-product orthodoxy of teacher effectiveness, which makes a causal connection between effective teaching and student test scores, and which assumes that effective teaching arises solely from, and because of, the actions of the teacher.
Abstract:
A plethora of recent literature on asset pricing provides empirical evidence on the importance of liquidity, governance, and adverse selection of equity in the pricing of assets, together with more traditional factors such as market beta and the Fama-French factors. However, the literature has usually stressed that these factors are priced individually. In this dissertation we argue that these factors may be related to each other, hence not only individual but also joint tests of their significance are called for. In three related essays, we examine the liquidity premium in the context of the finer three-digit SIC industry classification, the joint importance of liquidity and governance factors, and governance together with adverse selection. Recent studies by Core, Guay and Rusticus (2006) and Ben-Rephael, Kadan and Wohl (2010) find that the governance and liquidity premiums have been dwindling in recent years. One reason could be that liquidity is very unevenly distributed across industries, which could affect the interpretation of prior liquidity studies. Thus, in the first chapter we analyze the relation between industry clustering and liquidity risk, following the finer industry classification suggested by Johnson, Moorman and Sorescu (2009). In the second chapter, we examine the dwindling influence of the governance factor when taken simultaneously with liquidity. We argue that this happens because governance characteristics are potentially a proxy for information asymmetry that may be better captured by the market liquidity of a company's shares. Hence, we jointly examine both factors, governance and liquidity, in a series of standard asset pricing tests. Our results reconfirm the importance of governance and liquidity in explaining stock returns, independently corroborating the findings of Amihud (2002) and Gompers, Ishii and Metrick (2003). Moreover, governance is not subsumed by liquidity. Lastly, we analyze the relation between governance and adverse selection, and again corroborate previous findings of a priced governance factor. Furthermore, we ascertain the importance of microstructure measures in asset pricing by employing Huang and Stoll's (1997) method to extract an adverse selection variable and finding evidence for its explanatory power in four-factor regressions.
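As an illustration of the kind of time-series test such essays run, the following is a minimal sketch of a four-factor regression with synthetic data standing in for returns and factors; the factor names (MKT, SMB, HML, LIQ) and all numbers are assumptions, not the dissertation's specification or results.

```python
import numpy as np

# Sketch: regress a portfolio's excess returns on market, size, value and a
# liquidity factor. Everything here is synthetic, for illustration only.
rng = np.random.default_rng(0)
T = 240                                  # 20 years of monthly observations
factors = rng.normal(0, 0.04, (T, 4))    # columns: MKT, SMB, HML, LIQ (assumed)
betas_true = np.array([1.1, 0.4, 0.3, 0.5])
excess_ret = factors @ betas_true + rng.normal(0, 0.02, T)

X = np.column_stack([np.ones(T), factors])        # intercept column = alpha
coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
alpha, betas = coef[0], coef[1:]
print(f"alpha={alpha:.4f}, betas={np.round(betas, 3)}")
# A significant loading on the LIQ column alongside the other factors is the
# kind of evidence used to argue that liquidity is priced and not subsumed.
```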
Abstract:
Technological Education is a subject in which students acquire knowledge and technical skills that will enable them to analyse and resolve specific situations and prepare them for an increasingly technological world. This course requires students to gain knowledge and know-how, such that motivation and commitment are crucial for the development of classroom projects and activities. It is in this context that traditional toys come up in this study as catalysts for motivation and student interest. Thus, the aim of the research is to understand whether units of work related to traditional toys promote students' motivation and commitment in Technological Education. In terms of methodology, we carried out exploratory research of a qualitative nature, based on semi-structured interviews with teachers and students in the 2nd cycle of basic education at five schools in the municipality of Viseu, Portugal. Nine teachers and forty-five Technological Education students, aged between 10 and 12 years and attending the 5th and 6th years of schooling, participated. Content analysis of the answers revealed that the implementation of units of work involving the construction of traditional toys is conducive to students' motivation and commitment, constituting an added value in Technological Education. As this is a classroom project, it allows students to apply the technical knowledge they have acquired. Thus, starting from a first idea, it allows them to experience all the stages of toy building, from conception to completion, contributing to greater student satisfaction in the teaching-learning process.
Abstract:
Problem Statement: This research aims to understand the contribution of traditional toys as catalysts for motivation and student commitment in the development of Technological Education projects and activities. Research Questions: To what extent do work units related to traditional toys promote student motivation and commitment in the subject of Technological Education? Purpose of Study: Technological Education requires students to gain knowledge and know-how, such that motivation and commitment are crucial for the development of classroom projects and activities. It is in this context that traditional toys are assumed to be catalysts for motivation and student interest. Research Methods: In terms of methodology, exploratory research of a qualitative nature was carried out, based on semi-structured interviews with teachers and students within a 2nd cycle of Basic Education environment, encompassing five state schools in the Viseu municipality, Portugal. Nine teachers and forty-five Technological Education pupils, aged between 10 and 12 and attending the 5th and 6th years of schooling, participated. Findings: Content analysis of the answers revealed that the implementation of work units involving the construction of traditional toys is conducive to student motivation and commitment. Starting from an initial idea, pupils are able to experience all the stages of toy building, from conception to completion, contributing to greater student satisfaction in the teaching-learning process. Conclusions: Traditional toys constitute an added value in the subject of Technological Education, promoting student motivation and commitment in the development of projects and activities. Students acquire knowledge and skills that will enable them to analyze and thus resolve specific situations and prepare them for an increasingly technological world.
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data streams. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusted to incorporate the antenna's motion. However, none of the observables survived the flexing of the arms, in that they no longer led to cancellation with the same structure. The principal component approach presented by Romano and Woan is another way of handling these noises; it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that an eigendecomposition of this matrix produces two distinct sets of eigenvalues, distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors likewise produces data free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produce the same outcome, that is, data free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10 x 10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed, unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, and therefore analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
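As a numerical illustration of the eigendecomposition argument (a toy sketch in the spirit of the method, not the thesis's 10 x 10 LISA model), the snippet below builds readings that share one very large common noise, standing in for laser frequency noise, plus small independent noises, and shows the covariance eigenvalues splitting into a contaminated set and a noise-free set.

```python
import numpy as np

# Four readings, each containing the same huge common noise plus its own
# small independent noise; all sizes and variances are illustrative.
rng = np.random.default_rng(1)
n_samples, n_readings = 100_000, 4
laser = rng.normal(0, 100.0, n_samples)                 # large common noise
data = laser[:, None] + rng.normal(0, 1.0, (n_samples, n_readings))

cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)                  # ascending order
print(np.round(eigvals, 2))   # three eigenvalues near 1, one near 4e4

# Projecting onto the small-eigenvalue eigenvectors cancels the common
# noise, analogous to how TDI combinations cancel laser frequency noise.
clean = data @ eigvecs[:, :n_readings - 1]
print(np.round(np.std(clean, axis=0), 2))               # O(1), laser-free
```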
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are phase-locking, the arm lengths, and the noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which appear in the covariance matrix; from our toy-model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation methods that take advantage of this structure. Separating the two sets of data for the analysis was not necessary, because the laser frequency noises are very large compared to the photodetector noises, which results in a significant reduction of the data containing them after the matrix inversion. In the frequency domain, the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and the non-stationarity do not show up, because of the summation in the Fourier transform.
Abstract:
From the perspective of reverse logistics, this research seeks to show how a product so widely used and discarded by society can be given a new use, and even have its raw materials fully reintegrated into the supply chain, demonstrating the close relationship between reverse logistics and the reuse of end-of-life products. To this end, the current handling of tires in Bogotá is analysed, mapping their flow and identifying the main point of failure, which is collection and the various stockpiling sites. Tires are discarded annually in Bogotá without weighing the environmental consequences, since open-air burning of these materials and their inadequate storage create high risks for the surroundings and the environment. Moreover, improper handling is one of the main reasons tires become obsolete shortly after entering use. The tire manufacturing process is very similar to that of any other product: in summary, it involves the input of raw materials, a manufacturing process, a final inspection and, as a result, a finished product; once it is sold, many companies free themselves of responsibility for the tires' final disposal. But therein lies an opportunity for reverse logistics, which seeks a way to give this product a new life cycle through recycling and reuse. Through this research, we aim to identify the main collection sites for used tires in Bogotá, which will serve as our main sources of information for the project, allowing us to clearly propose and define strategies and qualitative conclusions.
Abstract:
Starting from the evolutionary dynamics of the Information and Communication Technologies economy and the establishment of minimum speed standards in different regulatory contexts worldwide, in particular in Colombia, this article presents several empirical approaches to evaluating the real effects of establishing broadband service definitions in the fixed Internet market. Based on the data available for Colombia on the fixed Internet service plans offered during the period 2006-2012, we estimate, for the residential and corporate segments, a modified logistic diffusion process and a strategic interaction model, in order to identify the impacts on the take-up of the service at the municipal level and on operators' strategic decisions, respectively. Regarding the results, we find, on the one hand, that the two regulatory measures established in Colombia in 2008 and 2010 have significant, positive effects on the shift and growth of the diffusion processes at the municipal level. On the other hand, strategic substitutability is observed in corporate operators' download-speed offering decisions, while an analysis of the distance between offered speeds and the minimum broadband standard shows that residential service providers tend to cluster their speed decisions around the levels established by regulation.
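As a hedged illustration of the estimation idea, the sketch below fits a plain logistic diffusion curve to synthetic adoption data; the parameterisation, the data, and the interpretation comment are assumptions for illustration, not the article's modified model or dataset.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # K: saturation level, r: growth rate, t0: inflection point
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.arange(2006, 2013, 0.25)                    # quarterly, 2006-2012
true = logistic(t, K=0.45, r=1.1, t0=2009.5)       # synthetic penetration
obs = true + np.random.default_rng(2).normal(0, 0.01, t.size)

params, _ = curve_fit(logistic, t, obs, p0=[0.5, 1.0, 2009.0])
print(dict(zip(["K", "r", "t0"], np.round(params, 3))))
# Comparing shifts in r or t0 estimated before and after a minimum-speed
# rule is one way to read a regulation's effect on the diffusion process.
```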
Abstract:
Introduction: Laparoscopic cholecystectomy is the technique of choice in patients with an indication for surgical removal of the gallbladder; however, on average 20% of these require conversion to the open technique. This study evaluated preoperative risk factors for conversion in emergency laparoscopic cholecystectomy. Methods: An unmatched case-control study was carried out. Sociodemographic information and variables of interest were obtained from the clinical records of patients operated on between 2013 and 2016. The reasons for conversion of the surgical technique were identified. The study population was characterized and associations were estimated according to the nature of the variables. Possible confounding variables were adjusted for by logistic regression analysis. Results: Data from 444 patients (111 cases and 333 controls) were analyzed. The most frequent cause of conversion was technical difficulty (50.5%). Older age, male sex, a history of open surgery in the upper hemiabdomen, a positive clinical Murphy sign, bile duct dilation, leukocytosis, and greater surgeon experience were risk factors for conversion. The area under the ROC curve was 0.743 (95% CI 0.692-0.794, p < 0.001). Discussion: Some factors are associated with a higher risk of conversion in laparoscopic cholecystectomy. Most are related to a more severe inflammatory process, so prolonging the waiting time between symptom onset and surgical removal of the gallbladder should be avoided.
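As an illustration of this type of analysis, the sketch below fits a logistic regression of conversion on preoperative covariates and summarises it with the ROC AUC; the variable names, coefficients, and data are synthetic assumptions, not the study's records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic case-control-style data: five assumed preoperative predictors.
rng = np.random.default_rng(3)
n = 444
X = np.column_stack([
    rng.normal(55, 15, n),        # age (years)
    rng.integers(0, 2, n),        # male sex
    rng.integers(0, 2, n),        # prior upper-abdominal open surgery
    rng.integers(0, 2, n),        # positive clinical Murphy sign
    rng.normal(9, 3, n),          # white-cell count (x10^3/uL)
])
# Assumed data-generating coefficients, for illustration only.
logit = (-6.4 + 0.05 * X[:, 0] + 0.8 * X[:, 1] + 1.0 * X[:, 2]
         + 0.6 * X[:, 3] + 0.15 * X[:, 4])
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # conversion yes/no

model = LogisticRegression(max_iter=1000).fit(X, y)
print(f"AUC = {roc_auc_score(y, model.predict_proba(X)[:, 1]):.3f}")
```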
Abstract:
This study aims to describe and compare the level of, and differences in, the emotional competencies of children who attend toy libraries (ludotecas) and those who do not. The study falls within quantitative research, with a non-experimental, descriptive-comparative design. A sample of 540 Colombian children was taken, of whom 249 were boys and 287 were girls. An "Instrumentation Guide" was used, divided into two observation grids, one for a natural situation and the other for an artificial situation. The results showed significant differences for the emotion-expression and empathy components, but no significant differences for the self-regulation component. Broadly speaking, this would show that toy libraries do have an effect on emotional competencies, more specifically on the expression of emotions and empathy; this may occur because these two components can be developed through early social interactions and social play, whereas the development of self-regulation depends more on personal factors, such as temperament, and family factors, such as the mother-child relationship. Finally, more studies are needed to provide further evidence of the relationship between emotional competencies and toy libraries, so that toy libraries and emotional competencies come to be seen as key factors in children's development.