941 results for Characteristic Initial Value Problem
Abstract:
The value premium is well established in empirical asset pricing, but to date there is little understanding as to its fundamental drivers. We use a stochastic earnings valuation model to establish a direct link between the volatility of future earnings growth and firm value. We illustrate that risky earnings growth affects growth and value firms differently. We provide empirical evidence that the volatility of future earnings growth is a significant determinant of the value premium. Using data on individual firms and characteristic-sorted test portfolios, we also find that earnings growth volatility is significant in explaining the cross-sectional variation of stock returns. Our findings imply that the value premium is the rational consequence of accounting for risky earnings growth in the firm valuation process.
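As a purely hypothetical illustration of the mechanism this abstract describes (not the paper's actual stochastic earnings valuation model), the Monte Carlo sketch below shows how, holding expected growth fixed, higher volatility of earnings growth changes discounted firm value, and how the size of the effect differs between value-like (low drift) and growth-like (high drift) firms. The lognormal growth assumption and all parameters are invented for illustration.

```python
import numpy as np

# Hypothetical Monte Carlo sketch (not the paper's model): discounted value
# of an earnings stream whose log growth rate g_t = mu + sigma * z_t is
# stochastic. All parameters below are invented for illustration.
def firm_value(e0, mu, sigma, r=0.08, horizon=20, n_paths=50_000, seed=0):
    rng = np.random.default_rng(seed)
    g = mu + sigma * rng.standard_normal((n_paths, horizon))
    earnings = e0 * np.exp(np.cumsum(g, axis=1))       # simulated earnings paths
    discount = np.exp(-r * np.arange(1, horizon + 1))  # continuous discounting
    return float((earnings * discount).sum(axis=1).mean())

# Holding the drift mu fixed, higher earnings-growth volatility sigma changes
# value (via the convexity of exp), and the size of the effect differs between
# value-like (low mu) and growth-like (high mu) firms.
for mu in (0.02, 0.10):
    for sigma in (0.05, 0.25):
        print(f"mu={mu:.2f} sigma={sigma:.2f} value={firm_value(1.0, mu, sigma):.2f}")
```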
Abstract:
The main objective of the project is to improve the efficiency of body-repair and paint services at Caetano Auto Colisão through the application of tools associated with the Lean philosophy. Although lean tools and techniques are well explored in production and manufacturing companies, the same is not true for companies in the service sector. Value Stream Mapping is a lean tool that consists of mapping the flow of materials and information required to carry out the activities (both value-adding and non-value-adding) performed by employees, suppliers and distributors, from receipt of the customer's order to final delivery of the service. This tool makes it possible to identify the activities that add no value to the process and to propose improvement measures that result in their elimination or reduction. Based on this concept, the body-repair and paint service process was mapped and the sources of inefficiency identified. From this analysis, improvements were suggested that aim to reach the proposed future state and to make the process more efficient. Two of these improvements were the implementation of 5S in the paint room and the preparation of an A3 report for the washing centre. The project allowed the study of a real problem in a service company, as well as the proposal of a set of improvements which, in the medium term, are expected to contribute to improving the efficiency of body-repair and paint services.
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, the plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
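As an illustration of the sampling-based progressive analytics idea that NOW! builds on (not NOW!'s actual implementation, whose progress semantics, operators and provenance tracking are far richer), here is a minimal sketch of deterministic progressive aggregation, with all data synthetic:

```python
import random

# Minimal sketch of progressive analytics: report early, increasingly
# accurate answers to an aggregate query over progressively larger samples.
random.seed(42)
data = [random.gauss(100, 20) for _ in range(100_000)]  # synthetic relation

random.shuffle(data)   # fixing one random order makes the progressive
                       # samples deterministic and repeatable
checkpoints = {1_000, 10_000, 50_000, 100_000}
running_sum = 0.0
for rows_seen, value in enumerate(data, start=1):
    running_sum += value
    if rows_seen in checkpoints:
        # An early result, tagged with how much data it is based on
        # (a crude form of the provenance the abstract mentions).
        print(f"after {rows_seen:>7,} rows: AVG ~= {running_sum / rows_seen:.3f}")
```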
Abstract:
In recent years, the luxury market has entered a period of very modest growth, which has been dubbed the ‘new normal’, where varying tourist flows, currency fluctuations, and shifting consumer tastes dictate the terms. The modern luxury consumer is a fickle mistress. Especially millennials – people born in the 1980s and 1990s – are the embodiment of this new form of demanding luxury consumer with particular tastes and values. Modern consumers, and specifically millennials, want experiences and free time, and are interested in a brand’s societal position and environmental impact. The purpose of this thesis is to investigate the luxury value perceptions of millennials in higher education in Europe, given that many of the most prominent luxury goods companies in the world originate from Europe. Perceived luxury value is herein examined from the individual’s perspective. As values and value perceptions are complex constructs, using qualitative research methods is justifiable. The data for this thesis were gathered by means of a group interview. The interview participants all study hospitality management in a private college, and each represents a different nationality. Cultural theories and research on luxury and luxury values provide the scientific foundation for this thesis, and a multidimensional luxury value model is used as a theoretical tool in sorting and analyzing the data. The results show that millennials in Europe value much more than simply modern and hard luxury. Functional, financial, individual, and social aspects are all present in perceived luxury value, but some more in a negative sense than others. Conspicuous, status-seeking consumption is mostly frowned upon, as is the consumption of luxury goods for the sake of satisfying social requisites and peer pressure. Most of the positive value perceptions are attributed to the functional dimension, as luxury products are seen to come with a promise of high quality and reliability, which justifies any price premiums. Ecological and ethical aspects of luxury are already a contemporary trend, but are perceived as an even more important characteristic of luxury in the future. Most importantly, having time is fundamental. Depending on who is asked, luxury can mean anything, just as much as it can mean nothing.
Abstract:
A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work that used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually, we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
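A minimal sketch of the estimate-and-sample loop the abstract describes might look as follows. The chain-structured network (rule for nurse i conditioned on the rule for nurse i-1), the toy fitness function, and all parameters are simplifying assumptions, not the authors' actual design:

```python
import random
from collections import defaultdict

N_NURSES, N_RULES, POP, ELITE, GENS = 10, 4, 60, 20, 30

def fitness(rule_string):                 # toy stand-in objective
    return -sum(abs(r - 1) for r in rule_string)

def sample_rule(counts, i, parent):
    """Draw rule i from the estimated conditional P(rule_i | parent)."""
    dist = counts.get((i, parent))
    if not dist:                          # unseen context: fall back to uniform
        return random.randrange(N_RULES)
    rules, weights = zip(*dist.items())
    return random.choices(rules, weights)[0]

population = [[random.randrange(N_RULES) for _ in range(N_NURSES)]
              for _ in range(POP)]
for _ in range(GENS):
    # 1. Select a set of promising rule strings.
    elite = sorted(population, key=fitness, reverse=True)[:ELITE]
    # 2. Estimate the conditional probabilities from that set.
    counts = defaultdict(lambda: defaultdict(int))
    for s in elite:
        for i, rule in enumerate(s):
            parent = s[i - 1] if i > 0 else None   # None marks a root node
            counts[(i, parent)][rule] += 1
    # 3. Generate new rule strings variable by variable; keep the fittest.
    offspring = []
    for _ in range(POP):
        s = []
        for i in range(N_NURSES):
            s.append(sample_rule(counts, i, s[i - 1] if i > 0 else None))
        offspring.append(s)
    population = sorted(population + offspring, key=fitness, reverse=True)[:POP]

print("best rule string:", max(population, key=fitness))
```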
Abstract:
Australian forest industries have a long history of export trade of a wide range of products, from woodchips (for paper manufacturing) and sandalwood (essential oils, carving and incense) to high value musical instruments, flooring and outdoor furniture. For the high value group, fluctuating environmental conditions brought on by changes in temperature and relative humidity can lead to performance problems due to consequential swelling, shrinkage and/or distortion of the wood elements. A survey determined the types of value-added products exported, including species and dimensions, the packaging used, and the export markets. Data loggers were installed with shipments to monitor temperature and relative humidity conditions. These data were converted to timber equilibrium moisture content values to provide an indication of the environment that the wood elements would be acclimatising to. The results of the initial survey indicated that the primary high value wood export products included guitars, flooring, decking and outdoor furniture. The destination markets were mainly located in the northern hemisphere, particularly the United States of America, China, Hong Kong, Europe (including the United Kingdom), Japan, Korea and the Middle East. Other regions importing Australian-made wooden articles were south-east Asia, New Zealand and South Africa. Different timber species have differing rates of swelling and shrinkage, so the types of timber were also recorded during the survey. Results from this work determined that the major species were ash-type eucalypts from south-eastern Australia (commonly referred to in the market as Tasmanian oak), jarrah from Western Australia, and spotted gum, hoop pine, white cypress, blackbutt, brush box and Sydney blue gum from Queensland and New South Wales. The environmental conditions data indicated that microclimates in shipping containers can fluctuate extensively during shipping. Conditions at the time of manufacturing were usually between 10 and 12% equilibrium moisture content; however, conditions during shipping could range from 5% (very dry) to 20% (very humid). The packaging systems incorporated were reported to be efficient at protecting the wooden articles from damage during transit. The research highlighted the potential risk for wood components to ‘move’ in response to periods of drier or more humid conditions than those at the time of manufacturing, and the importance of engineering a packaging system that can account for the environmental conditions experienced in shipping containers. Examples of potential dimensional changes in wooden components were calculated based on published unit shrinkage data for key species and the climatic data returned from the logging equipment. The information highlighted the importance of good design to account for possible timber movement during shipping. A timber movement calculator was developed to allow designers to input component species, dimensions, site of manufacture and destination, to validate their product design.
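The movement calculation the abstract describes can be sketched as below: dimensional change equals component width times the species' unit shrinkage (percent size change per 1% change in equilibrium moisture content) times the EMC difference. The unit shrinkage coefficients here are illustrative placeholders, not the published figures the study used:

```python
# Hedged sketch of a timber movement calculation. Coefficients are
# illustrative placeholders (% tangential movement per 1% EMC change).
UNIT_SHRINKAGE = {
    "Tasmanian oak": 0.36,
    "jarrah": 0.30,
    "spotted gum": 0.40,
}

def movement_mm(species: str, width_mm: float,
                emc_at_manufacture: float, emc_in_transit: float) -> float:
    """Estimated width change (mm) between manufacture and transit EMCs."""
    coeff = UNIT_SHRINKAGE[species] / 100.0
    return width_mm * coeff * (emc_in_transit - emc_at_manufacture)

# A 130 mm flooring board made at 11% EMC, shipped through air at 18% EMC:
print(round(movement_mm("Tasmanian oak", 130, 11, 18), 2), "mm of swelling")
```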
Abstract:
This paper reports a lot-sizing and scheduling problem, which minimizes inventory and backlog costs on m parallel machines with sequence-dependent set-up times over t periods. Problem solutions are represented as product subsets, ordered and/or unordered, for each machine m at each period t. The optimal lot sizes are determined by applying a linear program. A genetic algorithm searches either over ordered or over unordered subsets (which are implicitly ordered using a fast ATSP-type heuristic) to identify an overall optimal solution. Initial computational results are presented, comparing the speed and solution quality of the ordered and unordered genetic algorithm approaches.
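The hybrid architecture can be sketched structurally as follows; this is an illustration only, with the linear program for lot sizes and the ATSP-type ordering heuristic replaced by a clearly marked placeholder cost, and all data invented:

```python
import random

# Structural sketch of the hybrid: a GA searches over product subsets
# assigned to each (machine, period); a separate routine prices each
# assignment. In the paper that routine solves an LP for optimal lot
# sizes over an ATSP-ordered sequence; here it is a toy placeholder.
M, T, PRODUCTS = 2, 3, ["A", "B", "C", "D"]

def random_solution():
    # For each (machine, period), an unordered subset of products.
    return {(m, t): frozenset(p for p in PRODUCTS if random.random() < 0.5)
            for m in range(M) for t in range(T)}

def cost(sol):
    # Placeholder for: order each subset (ATSP heuristic), then solve an
    # LP for lot sizes given that sequence (inventory + backlog + setups).
    return sum(len(s) ** 2 for s in sol.values())

def crossover(a, b):
    return {k: (a if random.random() < 0.5 else b)[k] for k in a}

pop = [random_solution() for _ in range(30)]
for _ in range(50):
    pop.sort(key=cost)
    parents = pop[:10]
    pop = parents + [crossover(random.choice(parents), random.choice(parents))
                     for _ in range(20)]
print("best placeholder cost:", cost(min(pop, key=cost)))
```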
Abstract:
Synthetic cannabinoid receptor agonists, more commonly known as synthetic cannabinoids (SCs), were originally created to obtain the medicinal value of THC, but they are an emerging social problem. SCs are mostly produced coated on herbal materials or in powder form and marketed under a variety of brand names, e.g. “Spice” and “K2”. Despite many SCs becoming controlled under drug legislation, many of them remain legal in some countries around the world. In Scotland, SCs are controlled under the Misuse of Drugs Act 1971 and the Psychoactive Substances Act 2016, which only cover a few early SCs. In Saudi Arabia, even fewer are controlled. The picture of the SC problem in Scotland is vague due to insufficient prevalence data, particularly data based on biological samples. Whilst there is evidence of increasing use of SCs throughout the world, there is currently no data regarding the use of products containing SCs among people in Saudi Arabia. Several studies indicate that SCs may cause serious toxicity and impairment to health; it is therefore important to understand the scale of use within society. A simple and sensitive method was developed for the simultaneous analysis of 10 parent SCs (JWH-018, JWH-073, JWH-250, JWH-200, AM-1248, UR-144, A-796260, AB-FUBINACA, 5F-AKB-48 and 5F-PB-22) in whole blood and 8 corresponding metabolites (JWH-018 4-OH pentyl, JWH-073 3-OH butyl, JWH-250 4-OH pentyl, AM-2201 4-OH pentyl, JWH-122 5-OH pentyl, JWH-210 5-OH pentyl, 5F-AKB-48 (N-4 OH pentyl) and 5F-PB-22 3-carboxyindole) in urine, using LLE and LC-MS/MS. The method was validated according to the standard practices for method validation in forensic toxicology (SWGTOX, May 2013). All analytes gave acceptable precision, linearity and recovery for analysing blood and urine samples. The method was applied to 1,496 biological samples, a mixture of whole blood and urine. Blood and/or urine samples were analysed from 114 patients presenting at Accident and Emergency in Glasgow Royal Infirmary in spring 2014 and June-December 2015. 5F-AKB-48, 5F-PB-22 and MDMB-CHMICA were detected in 9, 7 and 9 cases respectively. 904 urine samples from individuals admitted to or liberated from Scottish prisons during November 2013 were tested for the presence of SCs. 5F-AKB-48 (N-4 OH pentyl) was detected in 10 cases and 5F-PB-22 3-carboxyindole in 3 cases. Blood and urine samples from two post-mortem cases in Scotland with suspected ingestion of SCs were analysed; both cases were confirmed positive for 5F-AKB-48. A total of 463 urine samples were collected during July 2014 from personnel who presented to the Security Forces Hospital in Riyadh for workplace drug testing as a requirement of their job. The analysis found 2 samples to be positive for 5F-PB-22 3-carboxyindole. A further study in Saudi Arabia using a questionnaire was carried out among 3 subpopulations: medical professionals, members of the public in and around smoking cafes, and known drug users. With regard to general awareness of Spice products, 16%, 11% and 22% of the medical professionals, members of the public in and around smoking cafes, and known drug users, respectively, were aware of the existence of SCs or Spice products. Across all respondents, an average of 4.5% had a friend who used these Spice products. It is clear from the results obtained in both blood and urine testing and the surveys that SCs are being used in both Scotland and Saudi Arabia.
The extent of their use is not clear, and the data presented here are an initial look into their prevalence. Blood and urine findings suggest changing trends in SC use worldwide, moving away from the JWH and AM SCs to the newer 5F-AKB-48, 5F-PB-22 and MDMB-CHMICA compounds. 5F-PB-22 was detected in both countries. These findings show that the SC phenomenon is a worldwide problem, and that information on which SCs are seized in one country can help other countries rather than being specific to the country of seizure. The analytes included in the method were selected due to their apparent availability in both countries; however, it is possible that some newer analytes have been used and these would not have been detected. For this reason it is important that methods for testing SCs are updated regularly and evolve with the ever-changing availability of these drugs worldwide. In addition, there is little published literature regarding the concentrations of these drugs found in blood and urine samples, and this work goes some way towards understanding these.
Abstract:
This work is concerned with the design and analysis of hp-version discontinuous Galerkin (DG) finite element methods for boundary-value problems involving the biharmonic operator. The first part extends the unified approach of Arnold, Brezzi, Cockburn & Marini (SIAM J. Numer. Anal. 39, 5 (2001/02), 1749-1779), developed for the Poisson problem, to the design of DG methods via an appropriate choice of numerical flux functions for fourth-order problems; as an example we retrieve the interior penalty DG method developed by Süli & Mozolevski (Comput. Methods Appl. Mech. Engrg. 196, 13-16 (2007), 1851-1863). The second part of this work is concerned with a new a priori error analysis of the hp-version interior penalty DG method, when the error is measured in terms of both the energy-norm and the L2-norm, as well as certain linear functionals of the solution, for elemental polynomial degrees $p\ge 2$. Provided that the solution is piecewise analytic in an open neighbourhood of each element, exponential convergence is also proven for the p-version of the DG method. The sharpness of the theoretical developments is illustrated by numerical experiments.
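For orientation, here is a hedged sketch of the model problem behind the abstract; the clamped (essential) boundary conditions are an assumption on our part, and the paper's precise formulation may differ:

```latex
% Hedged sketch: the fourth-order model boundary-value problem, stated
% with clamped boundary conditions (an assumption; the paper may vary).
\[
  \Delta^2 u = f \quad \text{in } \Omega, \qquad
  u = \frac{\partial u}{\partial n} = 0 \quad \text{on } \partial\Omega .
\]
% The hp-version interior penalty DG method approximates u by
% discontinuous piecewise polynomials of degree p >= 2, with penalty
% terms weakly enforcing continuity of both the solution values and the
% normal derivatives across element faces.
```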
Abstract:
The present study examined the correlations between motivational orientation and students’ academic performance in mathematical problem solving and reading comprehension. The main purpose was to see whether students’ intrinsic motivation is related to their actual performance in different subject areas, math and reading. In addition, two different informants, students and teachers, were used to check whether the correlation differs by informant. Pearson’s correlation analysis was the main method, coupled with regression analysis. The results confirmed significant positive correlations between students’ academic performance and both students’ self-reported and teacher-evaluated motivational orientation. Teacher evaluation turned out to have more predictive value for academic achievement in math and reading. Between the subjects, mathematical problem solving showed higher correlations with most of the motivational subscales than reading comprehension did. The highest correlation was found between teacher evaluation of task orientation and students’ mathematical problem solving. The positive relationship between intrinsic motivation and academic achievement was confirmed. The disparity between students’ self-reports and teacher evaluations of motivational orientation was also addressed, along with the need for further examination.
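A minimal sketch of the analysis pattern named in the abstract (Pearson correlation coupled with simple regression), run on synthetic scores; none of the study's actual data or subscales are used here:

```python
import numpy as np

# Synthetic illustration: correlation between a motivation rating and an
# achievement score, plus a one-predictor regression. All numbers invented.
rng = np.random.default_rng(1)
motivation = rng.normal(3.5, 0.6, 120)                  # e.g. task-orientation rating
math_score = 20 + 8 * motivation + rng.normal(0, 4, 120)

r = np.corrcoef(motivation, math_score)[0, 1]           # Pearson's r
slope, intercept = np.polyfit(motivation, math_score, 1)
print(f"r = {r:.2f}; regression: score = {slope:.1f}*motivation + {intercept:.1f}")
```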
Abstract:
Cassini states correspond to the equilibria of the spin axis of a body whose orbit is perturbed. They were initially described for planetary satellites, but the spin axes of black hole binaries also present this kind of equilibria. In previous works, Cassini states were reported as spin-orbit resonances, but actually the spin of black hole binaries is in circulation and there is no resonant motion. Here we provide a general description of the spin dynamics of black hole binary systems based on a Hamiltonian formalism. In the absence of dissipation, the problem is integrable and it is easy to identify all possible trajectories of the spin for a given value of the total angular momentum. As the system collapses due to radiation reaction, the Cassini states shift to different positions, which modifies the dynamics around them. This is why the final spin distribution may differ from the initial one. Our method provides a simple way of predicting the distribution of the spin of black hole binaries at the end of the inspiral phase.
Abstract:
The worldwide growth of detergent industries, despite its considerable benefits, has introduced a new kind of contamination into the environment and drawn the attention of environmental authorities in different countries. Detergents entering aqueous solutions pollute water sources and the environment, causing problems and costs such as eutrophication, the slow decomposition of hard (poorly biodegradable) detergent groups, and foam production. After use, detergents are discharged into rivers, seas and lakes, with destructive effects on the environment. Many health problems are attributed to water containing detergent concentrations above permitted values. This establishes the importance of removing detergents from contaminated water so that it can be put to secondary use. To this end, inorganic nano- and micro-kaolin can be used. In this study, the adsorption of detergent on micro- and nano-kaolin adsorbents was studied, and the effects of various parameters, such as the amount of adsorbent material, initial detergent concentration, stirring speed, electrolyte, temperature, time and pH, were determined. The surface areas of micro- and nano-kaolin were measured as 11.867 and 49.1438 m² g⁻¹, respectively; the larger surface area of nano-kaolin accounts for its higher adsorption capacity and faster adsorption rate. The results of this research support the use of micro- and nano-kaolin as plentiful, available and effective adsorbents; of the two, nano-kaolin is recommended for its greater effectiveness.
Abstract:
The aim of this study is to investigate the effectiveness of problem-based learning (PBL) on students’ mathematical performance, comprising mathematics achievement and students’ attitudes towards mathematics, for third and eighth grade students in Saudi Arabia. Mathematics achievement covers the knowing, applying, and reasoning domains, while students’ attitudes towards mathematics cover ‘liking learning mathematics’, ‘valuing mathematics’, and ‘confidence in learning mathematics’. This study goes deeper to examine the interaction of a PBL teaching strategy, with teachers trained either face-to-face or through self-directed learning, on students’ performance (mathematics achievement and attitudes towards mathematics). It also examines the interaction between different ability levels of students (high and low) and a PBL teaching strategy (with face-to-face trained or self-directed learning teachers) on students’ performance. It draws upon the findings and techniques of the TIMSS international benchmarking studies. Mixed methods are used to analyse the quasi-experimental study data. One-way ANOVA, mixed ANOVA, and paired t-test models are used to analyse the quantitative data, while semi-structured interviews with teachers and the author’s observations are used to enrich understanding of PBL and mathematical performance. The findings show that the PBL teaching strategy significantly improves students’ knowledge application, outperforming traditional teaching methods among third grade students. This improvement, however, occurred only in the group whose teachers were trained face-to-face. Furthermore, there is robust evidence that a PBL teaching strategy can raise students’ liking of learning mathematics, and their confidence in learning mathematics, significantly more than traditional teaching methods among third grade students. However, there was no evidence that PBL could improve students’ performance (mathematics achievement and attitudes towards mathematics) more than traditional teaching methods among eighth grade students. In eighth grade, the findings show significant improvement for low achieving students compared with high achieving students, whether or not PBL was applied. However, for third grade students, no significant difference in mathematical achievement between high and low achieving students was found. The results for high achieving students were not as expected, and this is also discussed. The implications of these findings for mathematics education in Saudi Arabia are considered.
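A minimal sketch of the quantitative tests named in the abstract (one-way ANOVA and a paired t-test), run on invented scores purely for illustration; group names, means and sample sizes are not the study's:

```python
import numpy as np
from scipy import stats

# Synthetic illustration of the comparisons described in the abstract.
rng = np.random.default_rng(7)
pbl_f2f = rng.normal(72, 10, 40)   # PBL, face-to-face trained teachers
pbl_sdl = rng.normal(68, 10, 40)   # PBL, self-directed learning teachers
trad    = rng.normal(64, 10, 40)   # traditional teaching

f, p = stats.f_oneway(pbl_f2f, pbl_sdl, trad)   # one-way ANOVA across groups
print(f"one-way ANOVA: F={f:.2f}, p={p:.4f}")

pre  = rng.normal(60, 10, 40)                   # paired pre/post scores
post = pre + rng.normal(5, 6, 40)
t, p = stats.ttest_rel(post, pre)               # paired t-test within group
print(f"paired t-test: t={t:.2f}, p={p:.4f}")
```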
Abstract:
PURPOSE: We aimed to evaluate the added value of diffusion-weighted imaging (DWI) over standard magnetic resonance imaging (MRI) for detecting post-treatment cervical cancer recurrence. The detection accuracy of T2-weighted (T2W) images was compared with that of T2W MRI combined with either dynamic contrast-enhanced (DCE) MRI or DWI.
METHODS: Thirty-eight women with clinically suspected uterine cervical cancer recurrence more than six months after treatment completion were examined with 1.5 Tesla MRI including T2W, DCE, and DWI sequences. Disease was confirmed histologically and correlated with MRI findings. The diagnostic performance of T2W imaging and its combination with either DCE or DWI was analyzed. Sensitivity, positive predictive value, and accuracy were calculated.
RESULTS: Thirty-six women had histologically proven recurrence. The accuracy for recurrence detection was 80% with T2W/DCE MRI and 92.1% with T2W/DWI. The addition of DCE sequences did not significantly improve the diagnostic ability of T2W imaging, and this sequence combination misclassified two patients as false positives and seven as false negatives. The T2W/DWI combination yielded a positive predictive value of 100% and only three false negatives.
CONCLUSION: The addition of DWI to T2W sequences considerably improved the diagnostic ability of MRI. Our results support the inclusion of DWI in the initial MRI protocol for the detection of cervical cancer recurrence, leaving DCE sequences as an option for uncertain cases.
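As a quick check on the reported T2W/DWI figures, the metric arithmetic can be reproduced from a 2x2 table inferred from the abstract (38 patients, 36 with proven recurrence, no false positives, three false negatives); these counts are an inference, not quoted from the paper's tables:

```python
# Diagnostic-accuracy arithmetic for the T2W/DWI combination.
# Counts inferred from the abstract: 33 true positives, 3 false negatives,
# 0 false positives, 2 true negatives (38 patients in total).
tp, fn, fp, tn = 33, 3, 0, 2

sensitivity = tp / (tp + fn)                   # 33/36 ~= 0.917
ppv         = tp / (tp + fp)                   # 33/33  = 1.000, as reported
accuracy    = (tp + tn) / (tp + fn + fp + tn)  # 35/38 ~= 0.921, as reported

print(f"sensitivity={sensitivity:.1%}, PPV={ppv:.1%}, accuracy={accuracy:.1%}")
```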