942 results for "Small files problem"


Relevance:

100.00%

Publisher:

Abstract:

The Interact System Model (ISM) developed by Fisher and Hawes (1971) for the analysis of face-to-face communication during small-group problem solving activities was used to study online communication. This tool proved to be of value in the analysis, but the conversation patterns reported by Fisher (1980) did not fully appear in the online environment. Participants displayed a habit of "being too polite" and not fully voicing their disagreements with ideas posed by others. Thus progress towards task completion was slow and incomplete.

Relevance:

100.00%

Publisher:

Abstract:

We will discuss several examples and research efforts related to the small world problem and set the ground for our discussion of network theory and social network analysis.
Readings: J. Travers and S. Milgram, "An Experimental Study of the Small World Problem," Sociometry 32, 425-443 (1969). [Protected Access]
Optional: M. S. Granovetter, "The Strength of Weak Ties," The American Journal of Sociology 78, 1360-1380 (1973). [Protected Access]
Optional: J. Leskovec and E. Horvitz, "Worldwide Buzz: Planetary-Scale Views on an Instant-Messaging Network," MSR-TR-2006-186, Microsoft Research, June 2007. [Web Link, the most recent and comprehensive study on the subject!]
Originally from: http://kmi.tugraz.at/staff/markus/courses/SS2008/707.000_web-science/

Relevance:

90.00%

Publisher:

Abstract:

The iRODS system, created by the San Diego Supercomputing Centre, is a rule-oriented data management system that allows the user to create sets of rules to define how data are to be managed. Each rule corresponds to a particular action or operation (such as checksumming a file), and the system is flexible enough to allow the user to create new rules for new types of operations. The iRODS system can interface to any storage system (provided an iRODS driver is built for that system) and relies on its metadata catalogue to provide a virtual file system that can handle files of any size and type. However, some storage systems (such as tape systems) do not handle small files efficiently and prefer small files to be packaged up (or "bundled") into larger units. We have developed a system that can bundle small data files of any type into larger units, called mounted collections. The system can create collection families and contains its own extensible metadata, including metadata on which family a collection belongs to. The mounted collection system can work standalone and is being incorporated into the iRODS system to enhance the system's flexibility in handling small files. In this paper we describe the motivation for creating a mounted collection system, its architecture, and how it has been incorporated into the iRODS system. We describe the different technologies used to create the mounted collection system and provide some performance numbers.

Relevance:

90.00%

Publisher:

Abstract:

Since 2010, the client base of online-trading service providers has grown significantly. Such companies enable small investors to access the stock market at advantageous rates. Because small investors buy and sell stocks in moderate amounts, they should consider fixed transaction costs, integral transaction units, and dividends when selecting their portfolio. In this paper, we consider the small investor’s problem of investing capital in stocks in a way that maximizes the expected portfolio return and guarantees that the portfolio risk does not exceed a prescribed risk level. Portfolio-optimization models known from the literature are in general designed for institutional investors and do not consider the specific constraints of small investors. We therefore extend four well-known portfolio-optimization models to make them applicable for small investors. We consider one nonlinear model that uses variance as a risk measure and three linear models that use the mean absolute deviation from the portfolio return, the maximum loss, and the conditional value-at-risk as risk measures. We extend all models to consider piecewise-constant transaction costs, integral transaction units, and dividends. In an out-of-sample experiment based on Swiss stock-market data and the cost structure of the online-trading service provider Swissquote, we apply both the basic models and the extended models; the former represent the perspective of an institutional investor, and the latter the perspective of a small investor. The basic models compute portfolios that yield on average a slightly higher return than the portfolios computed with the extended models. However, all generated portfolios yield on average a higher return than the Swiss performance index. There are considerable differences between the four risk measures with respect to the mean realized portfolio return and the standard deviation of the realized portfolio return.
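The combination of integral transaction units, a flat fee, and a risk cap can be illustrated with a tiny brute-force model. All numbers below are invented, the risk measure is plain (uncorrelated) variance, and a real optimizer would use an integer-programming solver rather than enumeration.

```python
# Toy sketch of the small-investor constraints: integral share counts,
# a piecewise-constant (here: flat per-stock) fee, and a variance-based
# risk cap. Prices, returns, and variances are made-up numbers.
from itertools import product

PRICES = [50.0, 20.0]     # price per share of two hypothetical stocks
EXP_RET = [0.08, 0.05]    # expected one-period returns
VAR = [0.10, 0.03]        # per-stock return variances (assumed uncorrelated)
BUDGET = 1000.0
FEE = 9.0                 # flat fee charged once per stock actually traded
RISK_CAP = 40.0           # cap on the portfolio's standard deviation

def best_integer_portfolio():
    """Enumerate integral unit counts; maximize expected net return."""
    best_units, best_ret = None, float("-inf")
    for units in product(range(21), repeat=2):
        fees = FEE * sum(1 for u in units if u > 0)
        cost = sum(u * p for u, p in zip(units, PRICES)) + fees
        if cost > BUDGET:
            continue
        variance = sum((u * p) ** 2 * v
                       for u, p, v in zip(units, PRICES, VAR))
        if variance ** 0.5 > RISK_CAP:
            continue  # portfolio risk exceeds the prescribed level
        ret = sum(u * p * r for u, p, r in zip(units, PRICES, EXP_RET)) - fees
        if ret > best_ret:
            best_units, best_ret = units, ret
    return best_units, best_ret
```

Note how the fixed fee makes very small positions unattractive, which is precisely the small-investor effect the abstract describes.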


Relevance:

90.00%

Publisher:

Abstract:

The re-entrant flow shop scheduling problem (RFSP) is regarded as NP-hard and has attracted the attention of both researchers and industry. Current approaches attempt to minimize the makespan of the RFSP without considering the interdependency between the resource constraints and the re-entrant probability. This paper proposes a multi-level genetic algorithm (GA) that includes the correlated re-entrant possibility and production mode in a multi-level chromosome encoding. A repair operator is incorporated into the algorithm to revise infeasible solutions by resolving resource conflicts. With the objective of minimizing the makespan, the parameter settings of the GA are fine-tuned using ANOVA. The experiments show that the proposed approach is more effective at finding near-optimal schedules than a simulated annealing algorithm for both small-size and large-size problems. © 2013 Published by Elsevier Ltd.
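Two ingredients named above, the makespan objective and a repair operator for infeasible chromosomes, can be sketched generically. This is a plain permutation flow shop illustration; the paper's multi-level encoding with re-entrant probabilities and its resource-conflict repair are not reproduced here.

```python
# Generic sketch: makespan of a permutation flow shop, plus a repair
# operator that turns an invalid chromosome (duplicate job indices,
# e.g. after crossover) back into a valid permutation.

def makespan(perm, proc):
    """proc[j][m] = processing time of job j on machine m."""
    n_machines = len(proc[0])
    finish = [0.0] * n_machines          # latest completion per machine
    for j in perm:
        for m in range(n_machines):
            ready = finish[m - 1] if m > 0 else 0.0
            finish[m] = max(finish[m], ready) + proc[j][m]
    return finish[-1]

def repair(chromosome, n_jobs):
    """Replace duplicated genes with the missing job indices, in order."""
    missing = iter(j for j in range(n_jobs) if j not in set(chromosome))
    seen, fixed = set(), []
    for gene in chromosome:
        if gene in seen:
            fixed.append(next(missing))  # resolve the duplicate
        else:
            seen.add(gene)
            fixed.append(gene)
    return fixed
```

A GA would then evaluate `makespan(repair(child, n), proc)` as the fitness of each offspring.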

Relevance:

90.00%

Publisher:

Abstract:

The focus of this thesis is text data compression based on the fundamental coding scheme referred to as the American Standard Code for Information Interchange, or ASCII. The research objective is the development of software algorithms that result in significant compression of text data. Past and current compression techniques have been thoroughly reviewed to ensure proper contrast between the compression results of the proposed technique and those of existing ones. The research problem is based on the need to achieve higher compression of text files in order to save valuable memory space and increase the transmission rate of these files. It was deemed necessary that the compression algorithm to be developed be effective even for small files and be able to contend with uncommon words, as they are dynamically included in the dictionary once they are encountered. A critical design aspect of this compression technique is its compatibility with existing compression techniques. In other words, the developed algorithm can be used in conjunction with existing techniques to yield even higher compression ratios. This thesis demonstrates such capabilities and outcomes, and the research objective of achieving a higher compression ratio is attained.
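The dynamic-dictionary behavior described above, where uncommon words are added to the dictionary as they are encountered, is the idea behind LZW-style coders. The sketch below is a generic illustration of that family, not the thesis's actual algorithm.

```python
# Generic LZW-style encoder: starts from a fixed single-byte table and
# learns longer phrases dynamically as the input is scanned.

def lzw_encode(data: bytes):
    table = {bytes([i]): i for i in range(256)}
    phrase, out = b"", []
    for b in data:
        candidate = phrase + bytes([b])
        if candidate in table:
            phrase = candidate             # keep extending the match
        else:
            out.append(table[phrase])
            table[candidate] = len(table)  # new phrase learned on the fly
            phrase = bytes([b])
    if phrase:
        out.append(table[phrase])
    return out
```

On repetitive text the code list is shorter than the input, and the dictionary never has to be transmitted, because a decoder can rebuild it symmetrically.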

Relevance:

80.00%

Publisher:

Abstract:

This study considers the scheduling problem observed in the burn-in operation of semiconductor final testing, where jobs are associated with release times, due dates, processing times, sizes, and non-agreeable release times and due dates. The burn-in oven is modeled as a batch-processing machine that can process a batch of several jobs as long as the total size of the jobs does not exceed the machine capacity; the processing time of a batch is equal to the longest processing time among all the jobs in the batch. Due to the importance of on-time delivery in semiconductor manufacturing, the objective of this problem is to minimize total weighted tardiness. We formulate the scheduling problem as an integer linear programming model and empirically show its computational intractability. We therefore propose a few simple greedy heuristic algorithms and a meta-heuristic, simulated annealing (SA). A series of computational experiments is conducted to evaluate the performance of the proposed heuristic algorithms in comparison with exact solutions on various small-size problem instances and with estimated optimal solutions on various real-life large-size problem instances. The computational results show that the SA algorithm, with an initial solution obtained using our proposed greedy heuristic, consistently finds a robust solution in a reasonable amount of computation time.
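The batch-machine model above (a batch takes as long as its longest job, and total job sizes must fit the capacity) is easy to state in code. The sketch below simply fills batches in due-date order to show how the total-weighted-tardiness objective is computed; it is an illustration, not one of the paper's heuristics.

```python
# Illustration of the batch-processing model: jobs are grouped into
# batches subject to a capacity limit; a batch takes as long as its
# longest job; tardiness is weighted against each job's due date.

def total_weighted_tardiness(jobs, capacity):
    """jobs: list of dicts with keys size, time, due, weight, release."""
    ordered = sorted(jobs, key=lambda j: j["due"])   # EDD fill order
    batches, current, used = [], [], 0
    for j in ordered:
        if used + j["size"] > capacity:              # start a new batch
            batches.append(current)
            current, used = [], 0
        current.append(j)
        used += j["size"]
    if current:
        batches.append(current)
    clock, total = 0.0, 0.0
    for batch in batches:
        clock = max(clock, max(j["release"] for j in batch))
        clock += max(j["time"] for j in batch)       # longest job rules
        for j in batch:
            total += j["weight"] * max(0.0, clock - j["due"])
    return total
```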

Relevance:

80.00%

Publisher:

Abstract:

In this paper we address a scheduling problem for minimising total weighted tardiness. The motivation for the paper comes from the automobile gear manufacturing process. We consider the bottleneck operation of the heat treatment stage of gear manufacturing. Real-life scenarios such as unequal release times, incompatible job families, non-identical job sizes, and allowance for job splitting have been considered. A mathematical model taking into account dynamic starting conditions has been developed. Due to the NP-hard nature of the problem, a few heuristic algorithms are proposed. The performance of the proposed heuristic algorithms is evaluated: (a) in comparison with the optimal solution for small-size problem instances, and (b) in comparison with the 'estimated optimal solution' for large-size problem instances. Extensive computational analyses reveal that the proposed heuristic algorithms are capable of consistently obtaining near-optimal solutions (that is, statistically estimated ones) in very reasonable computational time.

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we address a scheduling problem for minimizing total weighted flowtime, observed in automobile gear manufacturing. Specifically, the bottleneck operation of the pre-heat treatment stage of the gear manufacturing process is dealt with in scheduling. Many real-life scenarios, such as unequal release times, sequence-dependent setup times, and machine eligibility restrictions, have been considered. A mathematical model taking into account dynamic starting conditions has been proposed. The problem is shown to be NP-hard. To approach the problem, a few heuristic algorithms are proposed. Based on planned computational experiments, the performance of the proposed heuristic algorithms is evaluated: (a) in comparison with the optimal solution for small-size problem instances, and (b) in comparison with the estimated optimal solution for large-size problem instances. Extensive computational analyses reveal that the proposed heuristic algorithms consistently yield solutions close to the statistically estimated optimum in a reasonable computational time.
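A minimal single-machine illustration of the weighted-flowtime objective with unequal release times uses the classic weighted-shortest-processing-time (WSPT) dispatching rule. The paper's heuristics additionally handle sequence-dependent setups and machine eligibility, which this sketch omits.

```python
# WSPT dispatching sketch: whenever the machine is free, start the
# available job with the largest weight / processing-time ratio, and
# accumulate weighted flowtime (completion minus release, weighted).

def wspt_weighted_flowtime(jobs):
    """jobs: list of (release, proc_time, weight) tuples."""
    pending = sorted(jobs)               # ordered by release time
    clock, total, queue, i = 0.0, 0.0, [], 0
    while i < len(pending) or queue:
        while i < len(pending) and pending[i][0] <= clock:
            queue.append(pending[i])
            i += 1
        if not queue:                    # machine idle until next release
            clock = pending[i][0]
            continue
        queue.sort(key=lambda j: j[2] / j[1], reverse=True)
        release, proc, weight = queue.pop(0)
        clock += proc
        total += weight * (clock - release)
    return total
```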

Relevance:

80.00%

Publisher:

Abstract:

This thesis considers in detail the dynamics of two oscillators with weak nonlinear coupling. There are three classes of such problems: non-resonant, where the Poincaré procedure is valid to the order considered; weakly resonant, where the Poincaré procedure breaks down because small divisors appear (but do not affect the O(1) term) and strongly resonant, where small divisors appear and lead to O(1) corrections. A perturbation method based on Cole's two-timing procedure is introduced. It avoids the small divisor problem in a straightforward manner, gives accurate answers which are valid for long times, and appears capable of handling all three types of problems with no change in the basic approach.
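The two-timing procedure mentioned above can be summarized in its standard form (generic notation, not necessarily the thesis's own):

```latex
% Two-timing ansatz: expand in the fast time t and the slow time
% \tau = \epsilon t, treated as independent variables.
x(t;\epsilon) = x_0(t,\tau) + \epsilon\, x_1(t,\tau) + O(\epsilon^2),
\qquad \tau = \epsilon t ,
% so that time derivatives pick up a slow-time correction:
\frac{dx}{dt} = \partial_t x_0
  + \epsilon \left( \partial_\tau x_0 + \partial_t x_1 \right)
  + O(\epsilon^2).
% Secular (resonant) forcing at O(\epsilon) is removed by choosing the
% \tau-dependence of x_0; this is how the small-divisor difficulty is
% avoided and why amplitudes evolve on the slow scale \tau = \epsilon t.
```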

One example of each type is studied with the aid of this procedure: for the nonresonant case the answer is equivalent to the Poincaré result; for the weakly resonant case the analytic form of the answer is found to depend (smoothly) on the difference between the initial energies of the two oscillators; for the strongly resonant case we find that the amplitudes of the two oscillators vary slowly with time as elliptic functions of ϵ t, where ϵ is the (small) coupling parameter.

Our results suggest that, as one might expect, the dynamical behavior of such systems varies smoothly with changes in the ratio of the fundamental frequencies of the two oscillators. Thus the pathological behavior of Whittaker's adelphic integrals as the frequency ratio is varied appears to be due to the fact that Whittaker ignored the small divisor problem. The energy sharing properties of these systems appear to depend strongly on the initial conditions, so that the systems are not ergodic.

The perturbation procedure appears to be applicable to a wide variety of other problems in addition to those considered here.

Relevance:

80.00%

Publisher:

Abstract:

This paper investigates the gene selection problem for microarray data with small samples and variant correlation. Most existing algorithms require expensive computational effort, especially when thousands of genes are involved. The main objective of this paper is to effectively select the most informative genes from microarray data while keeping the computational expense affordable. This is achieved by proposing a novel forward gene selection algorithm (FGSA). To overcome the small-sample problem, the augmented-data technique is first employed to produce an augmented data set. Taking inspiration from other gene selection methods, the L2-norm penalty is then introduced into the recently proposed fast regression algorithm to achieve the group selection ability. Finally, by defining a proper regression context, the proposed method can be implemented efficiently in software, which significantly reduces the computational burden. Both computational complexity analysis and simulation results confirm the effectiveness of the proposed algorithm in comparison with other approaches.
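The forward, L2-penalized selection idea can be sketched as a generic ridge-based greedy search. The function below illustrates the concept only; the FGSA's augmented-data step and fast-regression implementation are not reproduced, and all names are invented for the example.

```python
# Generic forward selection with an L2 (ridge) penalty: at each step,
# add the candidate feature (gene) that most reduces the penalized
# residual error of a ridge fit on the selected columns.
import numpy as np

def forward_select(X, y, k, lam=1e-2):
    n, p = X.shape
    selected = []
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            A = X[:, selected + [j]]
            # Ridge (L2-penalized) least squares on the candidate set.
            beta = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                                   A.T @ y)
            err = np.sum((y - A @ beta) ** 2) + lam * np.sum(beta ** 2)
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
    return selected
```

The grouped-selection behavior of the actual algorithm comes from the penalty shrinking correlated genes together, which this toy greedy loop only hints at.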

Relevance:

80.00%

Publisher:

Abstract:

ABSTRACT - Objectives: We attempted to show how the implementation of the key elements of the World Health Organization Patient Safety Curriculum Guide Multi-professional Edition in an undergraduate curriculum affected the knowledge, skills, and attitudes towards patient safety in a graduate-entry Portuguese medical school. Methods: After receiving formal recognition by the WHO as a Complementary Test Site and approval of the organizational ethics committee, the validated pre-course questionnaires measuring the knowledge, skills, and attitudes to patient safety were administered to the 2nd- and 3rd-year students pursuing a four-year course (N = 46). The key modules of the curriculum were implemented over the academic year by employing a variety of learning strategies, including expert lecturers, small-group problem-based teaching sessions, and Simulation Laboratory sessions. The identical questionnaires were then administered and the impact was measured. The Curriculum Guide was evaluated as a health education tool in this context. Results: A significant number of the respondents, 47% (n = 22), reported having received some form of prior patient safety training. The effect on patient safety knowledge was assessed by using the percentage of correct pre- and post-course answers to construct 2 × 2 contingency tables and by applying Fisher's test (two-tailed). No significant differences were detected at the p < 0.05 level. To assess the effect of the intervention on patient safety skills and attitudes, the mean and standard deviation were calculated for the pre- and post-course responses, and independent samples were subjected to the Mann-Whitney test. The attitudinal survey indicated a very high baseline incidence of desirable attitudes and skills toward patient safety. Significant changes were detected (p < 0.05) regarding what should happen if an error is made (p = 0.016), the role of healthcare organizations in error reporting (p = 0.006), and the extent of medical error (p = 0.005). Conclusions: The implementation of selected modules of the WHO Patient Safety Curriculum was associated with a number of positive changes regarding patient safety skills and attitudes, with a baseline incidence of highly desirable patient safety attitudes, but no measurable change in patient safety knowledge, at the University of Algarve Medical School. The significance of these results is discussed along with implications and suggestions for future research.

Relevance:

80.00%

Publisher:

Abstract:

The goal of this research was to gain an understanding of the process of professional socialization by accessing the role meaning of students engaged in a BScN program. Students from each of the four years and faculty members from the school of nursing volunteered as participants. G. Kelly's (1955) Personal Construct Theory provided the framework to determine awareness and constructed meanings. A reflective tool, called LifeMapping, was adapted and utilized to relate student experiences within education that have contributed to nurse role meaning. Focus-group interviews verified the data interpretation. Students are informed of their choice to study nursing through part-time and volunteer work and secondary-school cooperative placements. Descriptions reveal that choices are tested and both positive and negative aspects of the role observed. Bipolar images of good and bad nurses seem to be context-related. These images may establish biases in choices related to learning experiences. The person inside each aspiring nurse interprets, revises, and understands experiences to incorporate individual meaning into their value and belief structures. Students are aware of changes and describe them as developments that occur personally up to Year III and role-image changes that begin in Year II. The major difficulty that students encountered was described as negative attitudes towards their anticipated role. Humanistic-interactionist philosophies are echoed in student accounts of learning experiences. Growth and role development correspond to process factors of small-group, problem-based learning.

Relevance:

80.00%

Publisher:

Abstract:

This thesis presents a comparative analysis of the atmospheric parameters obtained with the photometric and spectroscopic techniques. To this end, photometric and spectroscopic data for 1375 DA white dwarfs drawn from the Sloan Digital Sky Survey (SDSS), as well as spectroscopic data from the Villanova White Dwarf Catalog, were used. It was first necessary to ensure that the photometric and spectroscopic data were properly calibrated. The photometric analysis showed that the ugriz photometry does not appear to suffer from calibration problems other than zero-point offsets, which are compensated by applying the appropriate photometric corrections. Moreover, the fact that the u filter transmits flux at certain wavelengths in the red does not seem to affect the determination of the atmospheric parameters. The spectroscopic analysis then confirmed that applying correction functions that account for 3D hydrodynamic effects is the solution to the high-log g problem. Comparing the information drawn from the spectroscopic data of the two different sources suggests that the calibration of the SDSS spectra is still not satisfactory. The atmospheric parameters determined with the two techniques were then compared, and the photometric temperatures are systematically lower than those obtained from the spectroscopic data. This systematic effect could be caused by the line profiles used in the model atmospheres. A method for estimating the surface gravity of a white dwarf from its photometry was also developed.