950 results for Initial Value Problem
Abstract:
Research has identified a number of putative risk factors that place adolescents at incrementally higher risk for involvement in alcohol and other drug (AOD) use and sexual risk behaviors (SRBs). Such factors include personality characteristics such as sensation-seeking, cognitive factors such as positive expectancies and inhibition conflict, as well as peer norm processes. The current study was guided by a conceptual perspective supporting the notion that an integrative framework including multi-level factors has significant explanatory value for understanding processes associated with the co-occurrence of AOD use and sexual risk behavior outcomes. This study simultaneously evaluated the mediating role of AOD-sex related expectancies and inhibition conflict on antecedents of AOD use and SRBs, including sexual sensation-seeking and peer norms for condom use. The sample was drawn from the Enhancing My Personal Options While Evaluating Risk (EMPOWER; Jonathan Tubman, PI) data set (N = 396; aged 12-18 years). Measures used in the study included the Sexual Sensation-Seeking Scale, the Inhibition Conflict for Condom Use measure, and the Risky Sex Scale. All relevant measures had well-documented psychometric properties. A global assessment of alcohol, drug use and sexual risk behaviors was used. Results demonstrated that AOD-sex related expectancies mediated the influence of sexual sensation-seeking on the co-occurrence of alcohol and other drug use and sexual risk behaviors. The evaluation of the integrative model also revealed that sexual sensation-seeking was positively associated with peer norms for condom use. Also, peer norms predicted inhibition conflict among this sample of multi-problem youth. This dissertation research identified mechanisms of risk and protection associated with the co-occurrence of AOD use and SRBs among a multi-problem sample of adolescents receiving treatment for alcohol or drug use and related problems.
This study is informative for adolescent-serving programs that address the individual and contextual characteristics that enhance treatment efficacy and effectiveness among adolescents receiving services for substance use and related problems.
Abstract:
The span of control is the single most discussed concept in classical and modern management theory. In specifying conditions for organizational effectiveness, the span of control has generally been regarded as a critical factor. Existing research has focused mainly on qualitative methods to analyze this concept, for example heuristic rules based on experience and/or intuition. This research takes a quantitative approach to the problem and formulates it as a binary integer model, which is used as a tool to study the organizational design issue. The model considers a range of requirements affecting management and supervision of a given set of jobs in a company. The decision variables include the allocation of jobs to workers, considering the complexity and compatibility of each job with respect to workers, and the management requirements for planning, execution, training, and control activities in a hierarchical organization. The objective of the model is to minimize operations cost, which is the sum of supervision costs at each level of the hierarchy and the costs of workers assigned to jobs. The model is intended for application in make-to-order industries as a design tool. It could also be applied to make-to-stock companies as an evaluation tool, to assess the optimality of their current organizational structure. Extensive experiments were conducted to validate the model, to study its behavior, and to evaluate the impact of changing parameters with practical problems. This research proposes a meta-heuristic approach to solving large-size problems, based on the concept of greedy algorithms and the Meta-RaPS algorithm. The proposed heuristic was evaluated with two measures of performance: solution quality and computational speed. The quality is assessed by comparing the obtained objective function value to that achieved by the optimal solution.
The computational efficiency is assessed by comparing the computer time used by the proposed heuristic to the time taken by a commercial software system. Test results show the proposed heuristic procedure generates good solutions in a time-efficient manner.
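The Meta-RaPS-style randomized greedy construction described above can be sketched in a few lines. This is an illustrative simplification under stated assumptions: the job names, costs, parameters and the `compatible` predicate below are hypothetical, and the actual model also covers supervision costs across hierarchy levels, which are omitted here. With probability `p_best` the cheapest compatible worker is chosen for a job; otherwise a random pick is made from a restricted list of near-cheapest candidates.

```python
import random

def meta_raps_assign(jobs, workers, cost, compatible, p_best=0.7, restriction=0.2, seed=0):
    """Meta-RaPS-style randomized greedy: assign each job to a compatible
    worker, usually the cheapest, occasionally a near-cheapest one."""
    rng = random.Random(seed)
    assignment = {}
    for job in jobs:
        candidates = sorted((cost[job][w], w) for w in workers if compatible(job, w))
        best_cost = candidates[0][0]
        if rng.random() < p_best:
            chosen = candidates[0]
        else:
            # Restricted candidate list: workers within (1 + restriction) of the best cost.
            rcl = [c for c in candidates if c[0] <= best_cost * (1 + restriction)]
            chosen = rng.choice(rcl)
        assignment[job] = chosen[1]
    return assignment

# Hypothetical instance; with p_best=1.0 the rule is purely greedy,
# so each job goes to its cheapest compatible worker.
jobs = ["j1", "j2", "j3"]
workers = ["w1", "w2"]
cost = {"j1": {"w1": 4, "w2": 6}, "j2": {"w1": 5, "w2": 3}, "j3": {"w1": 2, "w2": 2}}
assignment = meta_raps_assign(jobs, workers, cost, lambda j, w: True, p_best=1.0)
```

Lowering `p_best` below 1.0 introduces the randomization that lets repeated constructions escape the greedy solution, which is the core idea of Meta-RaPS.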
Abstract:
This thesis extended previous research on critical decision making and problem solving by refining and validating a measure designed to assess the use of critical thinking and critical discussion in sociomoral dilemmas. The purpose of this thesis was twofold: 1) to refine the administration of the Critical Thinking Subscale of the CDP to elicit more adequate responses and to refine the coding and scoring procedures for the total measure, and 2) to collect preliminary data on the initial reliabilities of the measure. Subjects consisted of 40 undergraduate students at Florida International University. Results indicate that the use of longer probes on the Critical Thinking Subscale was more effective in eliciting the adequate responses necessary for coding and evaluating the subjects' performance. Analyses of the psychometric properties of the measure consisted of test-retest reliability and inter-rater reliability.
Abstract:
This study explored the effect of workplace discrimination climate on team effectiveness through three serial mediators: collective value congruence, team cohesion, and collective affective commitment. As more individuals of marginalized groups diversify the workforce and as more organizations move toward team-based work (Cannon-Bowers & Bowers, 2010), it is imperative to understand how employees perceive their organization’s discriminatory climate as well as its effect on teams. An archival dataset consisting of 6,824 respondents was used, resulting in 332 work teams with five or more members each. The data were collected as part of an employee climate survey administered in 2011 throughout the United States’ Department of Defense. The results revealed that the indirect effect through M1 (collective value congruence) and M2 (team cohesion) best accounted for the relationship between workplace discrimination climate (X) and team effectiveness (Y). That is, on average, teams that reported a greater climate for workplace discrimination also reported less collective value congruence with their organization (a1 = -1.07, p < .001). With less shared perception of value congruence, there is less team cohesion (d21 = .45, p < .001), and with less team cohesion there is less team effectiveness (b2 = .57, p < .001). In addition, because of theoretical overlap, this study makes the case for studying workplace discrimination under the broader construct of workplace aggression within the I/O psychology literature. Exploratory and confirmatory factor analysis found that workplace discrimination based on five types of marginalized groups (race/ethnicity, gender, religion, age, and disability) was best explained by a three-factor model: career obstruction based on age and disability bias (CO), verbal aggression based on multiple types of bias (VA), and differential treatment based on racial/ethnic bias (DT).
There was initial support for the claim that workplace discrimination items covary not only based on type, but also based on form (i.e., nonviolent aggressive behaviors). Therefore, the form of workplace discrimination is just as important as the type when studying climate perceptions and team-level effects. Theoretical and organizational implications are also discussed.
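The serial indirect effect reported above is the product of the three path coefficients along the X → M1 → M2 → Y chain. A minimal sketch using the unstandardized coefficients quoted in the abstract (note that inference on the product, e.g. a bootstrap confidence interval, is not shown here):

```python
# Serial mediation X -> M1 -> M2 -> Y: the specific indirect effect through
# both mediators is the product of the path coefficients along the chain.
a1 = -1.07   # discrimination climate (X) -> collective value congruence (M1)
d21 = 0.45   # collective value congruence (M1) -> team cohesion (M2)
b2 = 0.57    # team cohesion (M2) -> team effectiveness (Y)

indirect = a1 * d21 * b2
print(round(indirect, 3))  # -0.274: a more discriminatory climate predicts lower effectiveness
```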
Abstract:
This paper presents a summary of the key objectives, instrumentation and logistic details, goals, and initial scientific findings of the European Marie Curie Action SAPUSS project carried out in the western Mediterranean Basin (WMB) during September-October 2010. The key SAPUSS objective is to deduce aerosol source characteristics and to understand the atmospheric processes responsible for their generation and transformation - both horizontally and vertically - in the Mediterranean urban environment. To achieve this, the unique approach of SAPUSS is concurrent measurement of aerosols with multiple techniques at six monitoring sites around the city of Barcelona (NE Spain): a main road traffic site, two urban background sites, a regional background site and two urban tower sites (150 m and 545 m above sea level; 150 m and 80 m above ground, respectively). SAPUSS allows us to substantially advance our knowledge of the atmospheric chemistry and physics of the urban Mediterranean environment, which is achieved only because of both the three-dimensional spatial scale and the high sampling time resolution used. During SAPUSS different meteorological regimes were encountered, including warm Saharan, cold Atlantic, wet European and stagnant regional ones. The different meteorology of these regimes is described herein. Additionally, we report the trends of the parameters regulated for air quality purposes (both gaseous and aerosol mass concentrations), and we also compare the six monitoring sites. High levels of traffic-related gaseous pollutants were measured at the urban ground-level monitoring sites, whereas layers of tropospheric ozone were recorded at tower levels. In particular, tower-level night-time average ozone concentrations (80 ± 25 µg m⁻³) were up to double those at ground level.
The examination of the vertical profiles clearly shows the predominant influence of NOx on ozone concentrations, and a source of ozone aloft. Analysis of the particulate matter (PM) mass concentrations shows an enhancement of coarse particles (PM2.5-10) at the urban ground level (+64 %, average 11.7 µg m⁻³) but of fine ones (PM1) at urban tower level (+28 %, average 14.4 µg m⁻³). These results show complex dynamics of the size-resolved PM mass at both horizontal and vertical levels of the study area. Preliminary modelling findings reveal an underestimation of the fine accumulation-mode aerosols. In summary, this paper lays the foundation of SAPUSS, an integrated study of relevance to many other similar urban Mediterranean coastal environments.
Abstract:
Marketers have long looked for observables that could explain differences in consumer behavior. Initial attempts centered on demographic factors such as age, gender, and race. Although such variables provide some useful information for segmentation (Bass, Tigert, and Lonsdale 1968), more recent studies have shown that variables tapping into consumers’ social classes and personal values have more predictive accuracy and also provide deeper insights into consumer behavior. I argue that one demographic construct, religion, merits further consideration as a factor that has a profound impact on consumer behavior. In this dissertation, I focus on two types of religious guidance that may influence consumer behavior: religious teachings (being content with one’s belongings) and religious problem-solving styles (reliance on God).
Essay 1 focuses on the well-established endowment effect and introduces a new moderator (religious teachings on contentment) that influences both owners’ and buyers’ pricing behaviors. Through fifteen experiments, I demonstrate that when people are primed with religion or characterized by stronger religious beliefs, they tend to value their belongings more than people who are not primed with religion or who have weaker religious beliefs. These effects are caused by religious teachings on being content with one’s belongings, which lead to the overvaluation of one’s own possessions.
Essay 2 focuses on self-control behaviors, specifically healthy eating, and introduces a new moderator (God’s role in the decision-making process) that determines the relationship between religiosity and the healthiness of food choices. My findings demonstrate that consumers who indicate that they defer to God in their decision-making make unhealthier food choices as their religiosity increases. The opposite is true for consumers who rely entirely on themselves. Importantly, this relationship is mediated by the consumer’s consideration of future consequences. This essay provides an explanation for the existing mixed findings on the relationship between religiosity and obesity.
Abstract:
Four experiments investigated whether the testing effect also applies to the acquisition of problem-solving skills from worked examples. Experiment 1 (n=120) showed no beneficial effects of testing, consisting of isomorphic problem solving or example recall, on final test performance (isomorphic problem solving) compared to continued study of isomorphic examples. Experiment 2 (n=124) showed no beneficial effects of testing consisting of identical problem solving compared to restudying an identical example. Interestingly, participants who took both an immediate and a delayed final test outperformed those taking only a delayed test. This finding suggested that testing might become beneficial for retention, but only after a certain level of schema acquisition has taken place through restudying several examples. However, experiment 2 had no control condition that restudied examples instead of taking the immediate test. Experiment 3 (n=129) included such a restudy condition, and there was no evidence that testing after studying four examples was more effective for final delayed test performance than restudying, regardless of whether restudied/tested problems were isomorphic or identical. Experiment 4 (n=75) used a design similar to experiment 3 (i.e., testing/restudy after four examples), but with examples on a different topic and a different participant population. Again, no evidence of a testing effect was found. Thus, across four experiments, with different types of initial tests, different problem-solving domains, and different participant populations, we found no evidence that testing enhanced delayed test performance compared to restudy. These findings suggest that the testing effect might not apply to acquiring problem-solving skills from worked examples.
Abstract:
Highly swellable polymer films doped with Ag nanoparticle aggregates (poly-SERS films) have been used to record very high signal-to-noise ratio, reproducible surface-enhanced (resonance) Raman (SER(R)S) spectra of in situ dried ink lines and their constituent dyes using both 633 and 785 nm excitation. These allowed the chemical origins of differences in the SERRS spectra of different inks to be determined. Initial investigation of pure samples of the 10 most common blue dyes showed that dyes with very similar chemical structures, such as Patent Blue V and Patent Blue VF (which differ only by a single OH group), gave SERRS spectra in which the only indications that the dye structure had changed were small differences in peak positions or in the relative intensities of the bands. SERRS studies of 13 gel pen inks were consistent with this observation. In some cases inks from different types of pens could be distinguished even though they were dominated by a single dye, such as Victoria Blue B (Zebra Surari) or Victoria Blue BO (Pilot Acroball), because their predominant dye did not appear in other inks. Conversely, identical spectra were also recorded from different types of pens (Pilot G7, Zebra Z-grip) because they all had the same dominant Brilliant Blue G dye. Finally, some of the inks contained mixtures of dyes which could be separated by TLC and removed from the plate before being analysed with the same poly-SERS films. For example, the Pentel EnerGel ink pen was found to give TLC spots corresponding to Erioglaucine and Brilliant Blue G. Overall, this study has shown that the spectral differences between different inks which are based on chemically similar, but nonetheless distinct, dyes are extremely small, so very close matches between SERRS spectra are required for confident identification. Poly-SERS substrates can routinely provide the very stringent reproducibility and sensitivity levels required.
This, coupled with awareness of the reasons underlying the observed differences between similarly coloured inks, allows a more confident assessment of the evidential value of ink SERS and should underpin adoption of this approach as a routine method for the forensic examination of inks.
Abstract:
Traditional heuristic approaches to the Examination Timetabling Problem normally utilize a stochastic method during optimization for the selection of the next examination to be considered for timetabling within the neighbourhood search process. This paper presents a technique whereby the stochastic method is augmented with information from a weighted list gathered during the initial adaptive construction phase, with the purpose of intelligently directing examination selection. In addition, a reinforcement learning technique has been adapted to identify the most effective portions of the weighted list in terms of facilitating the greatest potential for overall solution improvement. The technique is tested against the 2007 International Timetabling Competition datasets, with solutions generated within a time frame specified by the competition organizers. The results generated are better than those of the competition winner on seven of the twelve datasets, while being competitive on the remaining five. This paper also shows experimentally how the use of reinforcement learning has improved upon our previous technique.
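The weighted-list idea above can be illustrated with a small sketch. This is a simplified illustration under stated assumptions: the examination names, the roulette-wheel draw, and the reward/penalty constants are hypothetical, not the paper's actual parameters. Examinations are drawn in proportion to their list weights, and a simple reinforcement update strengthens selections that led to a solution improvement.

```python
import random

def pick_exam(weights, rng):
    """Roulette-wheel draw over the weighted list of examinations."""
    total = sum(weights.values())
    r = rng.random() * total
    acc = 0.0
    for exam, w in weights.items():
        acc += w
        if acc >= r:
            return exam
    return exam  # guard against floating-point rounding

def reinforce(weights, exam, improved, reward=1.0, penalty=0.5, floor=0.1):
    """Reinforcement update: reward a selection that improved the solution,
    penalize one that did not, never dropping below a small floor weight."""
    if improved:
        weights[exam] += reward
    else:
        weights[exam] = max(floor, weights[exam] - penalty)

weights = {"exam_A": 1.0, "exam_B": 1.0}
rng = random.Random(7)
chosen = pick_exam(weights, rng)
reinforce(weights, chosen, improved=True)  # chosen exam's weight grows to 2.0
```

Over many iterations, weights concentrate on the portions of the list whose selection tends to improve the timetable, which is the effect the paper exploits.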
Abstract:
Measures of the impact of Higher Education have often neglected the Chinese student view, despite the importance of these students to the UK and Chinese economies. This research paper details the findings of a quantitative survey that was purposively distributed to Chinese graduates who enrolled at the University of Worcester on the Business Management degree between 2004 and 2011 (n=49). Analysis was conducted on their skill development throughout their degree, their skill usage in different employment contexts, the value of their degree, and gender differences in skill development and usage. Discrepancies between skill development and usage, between males and females, and with previous research findings are discussed. Future research directions are also specified.
Abstract:
The main objective of this project is to improve the efficiency of body and paint repair services at Caetano Auto Colisão through the application of tools associated with the Lean philosophy. Although lean tools and techniques are well explored in production and manufacturing companies, the same is not true for companies in the services sector. Value Stream Mapping is a lean tool that consists of mapping the flows of materials and information needed to carry out the activities (both value-adding and non-value-adding) performed by employees, suppliers and distributors, from receipt of the customer's order to final delivery of the service. With this tool it is possible to identify the activities that add no value to the process and to propose improvement measures that result in their elimination or reduction. Based on this concept, the body and paint service process was mapped and the sources of inefficiency identified. From this analysis, improvements were suggested that aim to achieve the proposed future state and make the process more efficient. Two of these improvements were the implementation of 5S in the paint room and the preparation of an A3 report for the washing centre. The project allowed the study of a real problem in a service company, as well as the proposal of a set of improvements that, in the medium term, are expected to contribute to improving the efficiency of body and paint repair services.
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
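The progressive-sampling idea behind sampling-based progressive analytics can be illustrated with a deliberately simple sketch. This is not the NOW! implementation (which provides parallel execution, progress semantics and provenance); here a single aggregate is re-evaluated over growing prefixes of a shuffled dataset, so each early answer's work is a prefix of the next and is never thrown away.

```python
import random

def progressive_query(data, aggregate, fractions=(0.01, 0.1, 1.0), seed=42):
    """Yield (sample fraction, approximate answer) pairs over progressively
    larger samples; the 100% sample returns the exact answer. Shuffling once
    and taking prefixes makes each sample a superset of the previous one."""
    rng = random.Random(seed)
    shuffled = list(data)
    rng.shuffle(shuffled)
    for frac in fractions:
        n = max(1, int(len(shuffled) * frac))
        yield frac, aggregate(shuffled[:n])

# Progressive estimates of a mean; the final entry is exact
# because the mean of 1..1000 is 500.5 regardless of order.
data = range(1, 1001)
results = list(progressive_query(data, lambda s: sum(s) / len(s)))
```

The early fractions return rough estimates using a small slice of the data, which is the source of the cost reduction: an analyst can stop as soon as an estimate is good enough.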
Abstract:
A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work that used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
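The generate-from-probabilities step can be sketched as follows. For brevity this uses independent per-position probabilities — a univariate simplification of the paper's Bayesian network, which models dependencies between variables — and hypothetical rule indices:

```python
import random

def sample_rule_string(promising, num_rules, rng):
    """Estimate P(rule | position) from the current set of promising rule
    strings, then sample a new rule string position by position."""
    length = len(promising[0])
    new_string = []
    for pos in range(length):
        # Count how often each rule appears at this position among promising strings.
        counts = [0] * num_rules
        for s in promising:
            counts[s[pos]] += 1
        # Draw a rule in proportion to those counts.
        r = rng.random() * sum(counts)
        acc = 0
        for rule, c in enumerate(counts):
            acc += c
            if acc > r:
                new_string.append(rule)
                break
    return new_string

# Three promising strings that all use rule 0 first and rule 1 second:
promising = [[0, 1], [0, 1], [0, 1]]
new = sample_rule_string(promising, num_rules=2, rng=random.Random(3))
# With unanimous promising strings the sample is forced to [0, 1].
```

New strings sampled this way then compete with the old population under fitness selection, and the probability model is re-estimated from the survivors, closing the loop described in the abstract.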
Abstract:
Australian forest industries have a long history of export trade in a wide range of products, from woodchips (for paper manufacturing) and sandalwood (essential oils, carving and incense) to high-value musical instruments, flooring and outdoor furniture. For the high-value group, fluctuating environmental conditions brought on by changes in temperature and relative humidity can lead to performance problems due to consequential swelling, shrinkage and/or distortion of the wood elements. A survey determined the types of value-added products exported, including species and dimensions, packaging used, and export markets. Data loggers were installed with shipments to monitor temperature and relative humidity conditions. These data were converted to timber equilibrium moisture content values to provide an indication of the environment to which the wood elements would be acclimatising. The results of the initial survey indicated that the primary high-value wood export products included guitars, flooring, decking and outdoor furniture. The destination markets were mainly located in the northern hemisphere, particularly the United States of America, China, Hong Kong, Europe (including the United Kingdom), Japan, Korea and the Middle East. Other regions importing Australian-made wooden articles were south-east Asia, New Zealand and South Africa. Different timber species have differing rates of swelling and shrinkage, so the types of timber were also recorded during the survey. This work determined that the major species were ash-type eucalypts from south-eastern Australia (commonly referred to in the market as Tasmanian oak), jarrah from Western Australia, and spotted gum, hoop pine, white cypress, blackbutt, brush box and Sydney blue gum from Queensland and New South Wales. The environmental conditions data indicated that microclimates in shipping containers can fluctuate extensively during shipping.
Conditions at the time of manufacturing were usually between 10 and 12% equilibrium moisture content; however, conditions during shipping could range from 5% (very dry) to 20% (very humid). The packaging systems used were reported to be efficient at protecting the wooden articles from damage during transit. The research highlighted the potential risk for wood components to ‘move’ in response to periods of drier or more humid conditions than those at the time of manufacturing, and the importance of engineering a packaging system that can account for the environmental conditions experienced in shipping containers. Examples of potential dimensional changes in wooden components were calculated based on published unit shrinkage data for key species and the climatic data returned from the logging equipment. The information highlighted the importance of good design to account for possible timber movement during shipping. A timber movement calculator was developed to allow designers to input component species, dimensions, site of manufacture and destination, in order to validate their product design.
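The movement calculation described above can be sketched simply: dimensional change is approximately the component size times the species' unit shrinkage (percent size change per 1% change in moisture content) times the change in equilibrium moisture content. The function below and the 0.3% unit-shrinkage figure are illustrative assumptions, not values from the study; real species- and direction-specific coefficients come from published shrinkage data.

```python
def timber_movement(width_mm, unit_shrinkage_pct, emc_origin, emc_destination):
    """Estimate the dimensional change (mm) of a timber component as it
    re-equilibrates from the EMC at manufacture to the EMC in transit or at
    the destination. unit_shrinkage_pct is the percent size change per 1%
    change in moisture content; a negative result means shrinkage."""
    delta_emc = emc_destination - emc_origin
    return width_mm * (unit_shrinkage_pct / 100.0) * delta_emc

# A 90 mm board made at 11% EMC shipped into a 5% EMC (very dry) container,
# with an illustrative unit shrinkage of 0.3% per 1% moisture content change:
change = timber_movement(90, 0.3, 11, 5)  # -1.62 mm of shrinkage
```

A calculator of this kind makes the design implication concrete: a gap or fastening detail must tolerate movement of this magnitude in either direction across the 5-20% EMC range observed in containers.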