28 results for Empirical Algorithm Analysis
at Instituto Politécnico do Porto, Portugal
Abstract:
Considered the best paper of this conference.
Abstract:
The purpose of this paper is to identify the different types of motivations in hospital volunteers. We present a literature review about different types of motivation and collect data from hospital volunteers through a questionnaire. Four motivation categories are identified: development and learning, altruism, career recognition, and belonging and protection. The main motivations expressed are development and learning, followed by altruism; belonging and protection, followed by career recognition, are the least cited. Career recognition is negatively correlated with age, and belonging/protection is negatively correlated with education. That is, younger volunteers present more career-recognition motives, and less-educated volunteers report more protection and belonging motives. This study encompasses hospital volunteers and their motivations. The paper is useful to policy makers aiming to develop targeted approaches to attracting and retaining volunteers.
Abstract:
The process of resources-system selection plays an important part in Distributed/Agile/Virtual Enterprise (D/A/VE) integration. However, resources-system selection remains a difficult problem to solve in a D/A/VE, as this paper points out. Globally, the selection problem has been approached from different angles, giving rise to different kinds of models/algorithms to solve it. To assist the development of an intelligent and flexible web prototype tool (broker tool) that integrates all the selection-model activities and tools and can adapt to each D/A/VE project or instance (the major goal of our final project), this paper presents a formulation of one kind of resources-selection problem and the limitations of the algorithms proposed to solve it. We formulate a particular case of the problem as an integer program, which is solved using the simplex and branch-and-bound algorithms, and identify their performance limitations (in terms of processing time) based on simulation results. These limitations depend on the number of processing tasks and on the number of pre-selected resources per processing task, defining the domain of applicability of the algorithms for the problem studied. The limitations detected motivate the application of other kinds of algorithms (approximate-solution algorithms) outside the domain of applicability found for the simulated algorithms. For a broker tool, knowledge of algorithm limitations is nonetheless very important, in order to develop and select, based on problem features, the most suitable algorithm that guarantees good performance.
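To illustrate the flavour of the exact methods mentioned in this abstract, the sketch below formulates a toy selection instance — each processing task must receive exactly one of its pre-selected resources at minimum total cost — and solves it with a depth-first branch-and-bound. All names and cost figures are hypothetical; this is not the paper's broker-tool formulation.

```python
def select_resources(costs):
    """costs[t][r] = cost of serving task t with its r-th pre-selected resource.

    Returns (best_cost, plan) where plan[t] is the resource index chosen
    for task t.  Toy branch-and-bound, not the paper's actual model.
    """
    n = len(costs)
    best = [float("inf"), None]
    # Lower bound: cheapest remaining resource for every still-unassigned task.
    suffix_min = [0] * (n + 1)
    for t in range(n - 1, -1, -1):
        suffix_min[t] = suffix_min[t + 1] + min(costs[t])

    def branch(t, cost, plan):
        if cost + suffix_min[t] >= best[0]:   # prune: bound cannot beat incumbent
            return
        if t == n:
            best[0], best[1] = cost, tuple(plan)
            return
        for r, c in enumerate(costs[t]):
            plan.append(r)
            branch(t + 1, cost + c, plan)
            plan.pop()

    branch(0, 0, [])
    return best[0], best[1]
```

The search space grows as (resources per task) raised to the number of tasks, which mirrors the processing-time blow-up the paper reports for exact algorithms as instances grow.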
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and the upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to compute the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP represent two special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and also provide some case studies to observe the impact of task parameters on the WCTT estimates.
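The flavour of such a bound can be sketched with a heavily simplified model (not the paper's NoC model): a naive analysis charges every interfering flow at every shared hop, while a "prune" step uses task-level information to drop interferers whose generating task cannot release a packet inside the analysis window. All structure and numbers here are illustrative.

```python
def wctt_bound(route, hop_latency, interferers, window):
    """Toy upper bound on packet traversal time.

    route        : list of router identifiers on the packet's path
    hop_latency  : fixed per-hop forwarding latency
    interferers  : {hop: [(transmission_time, period), ...]} flows sharing the hop
    window       : analysis window length (same time unit)
    """
    total = 0
    for hop in route:
        total += hop_latency
        for tx_time, period in interferers.get(hop, []):
            # Prune: only flows that can actually release a packet inside
            # the analysis window are charged as interference.
            if period <= window:
                total += tx_time
    return total
```

Tightening the window using the generating tasks' parameters is what lets the extended model discard interference that a purely topological analysis would have to assume.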
Abstract:
Crowdfunding (CF) is an increasingly attractive source to fund social projects. However, to the best of our knowledge, the study of CF for social purposes has remained largely unexplored in the literature. This research envisages a detailed examination of the role of CF in the early stage of social projects at the regional level. By comparing the characteristics of the projects available in the Portuguese Social Stock Exchange (PSSE) platform with others that did not use this source of financial support, we explore its role in regional development. The results show that, in most cases, both PSSE and Non-Governmental Organization projects complemented the services offered by the State or by the private sector. Furthermore, about a quarter of the projects present in PSSE operated in areas that were not being addressed either by the State or by the private sector. The results also show that more recent social ventures have a greater propensity to use PSSE. The same is found in organizations which work closely with their target audience. We also observed that the use of PSSE was correlated with the geographical scope of the social venture. Having the social organization act at a local or regional level seems to be strongly associated with the possibility of using social crowdfunding to finance social projects.
Abstract:
A methodology based on data-mining techniques to support the analysis of zonal prices in real transmission networks is proposed in this paper. The methodology uses clustering algorithms to group the buses into typical classes, each including a set of buses with similar Locational Marginal Price (LMP) values. Two different clustering algorithms have been used to determine the LMP clusters: the two-step and K-means algorithms. Adequacy measurement indices are used to evaluate the quality of the partition as well as to identify the best-performing algorithm. The paper includes a case study using an LMP database from the California ISO (CAISO) in order to identify zonal prices.
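A minimal, stdlib-only sketch of the K-means half of such a methodology: clustering one-dimensional LMP values so that buses with similar prices collapse into zonal groups. The initialization strategy and price data are illustrative, and the two-step algorithm and adequacy indices are not reproduced here.

```python
def kmeans_1d(values, k, iters=100):
    """Cluster 1-D LMP values into k zonal groups; returns sorted centers (k >= 2)."""
    vals = sorted(values)
    # Spread the initial centers across the sorted price range.
    centers = [vals[i * (len(vals) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vals:
            # Assign each bus price to its nearest center.
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        new = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
        if new == centers:      # converged: assignments no longer move the centers
            break
        centers = new
    return sorted(centers)
```

Each returned center is the mean LMP of one zonal class, which is exactly the "typical class" summary the methodology builds per group of buses.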
Abstract:
Purpose: The aim of this paper is to highlight the importance of qualitative research within the scope of management scientific studies, referring to its philosophy, nature and instruments. It also confronts it with quantitative methodology, addressing their differences as well as their complementarity and synergies, with the purpose of explaining, from a more analytic point of view, the relevance of qualitative methodology in the course of authentic, real research despite its complexity. Design/methodology/approach: Regardless of its broad application, one may attest to the scarcity of literature that focuses on qualitative research applied to the management scientific area, as opposed to the large amount that refers to quantitative research. Findings: The paper shows the influence that qualitative research has on management scientific research. Originality/value: Qualitative research assumes an important role within management research by allowing for the study and analysis of certain types of phenomena that occur inside organisations, and in respect of which quantitative studies cannot provide an answer.
Abstract:
Purpose: The aim of this paper is to promote qualitative methodology within the scientific community of management. The specific objective is to propose an empirical research process based on the case study method. This is to ensure rigour in the empirical research process, so that future research may follow a procedure similar to the one proposed. Design/methodology/approach: Following a qualitative methodological approach, we propose a research process that develops across four phases, each with several stages. This study analyses the preparatory and fieldwork phases and their stages. Findings: The paper shows the influence that case studies have on the qualitative empirical research process in management. Originality/value: The case study method assumes an important role within qualitative research by allowing for the study and analysis of certain types of phenomena that occur inside organisations, and in respect of which quantitative studies cannot provide an answer.
Abstract:
Electricity markets are complex environments, involving a large number of different entities with specific characteristics and objectives, making their decisions and interacting in a dynamic scene. Game theory has been widely used to support decisions in competitive environments; therefore, its application in electricity markets can prove to be a high-potential tool. This paper proposes a new scenario-analysis algorithm, which includes the application of game theory, to evaluate and anticipate different scenarios and provide players with the ability to react strategically in order to exhibit the behavior that best fits their objectives. This model includes forecasts of competitor players’ actions, used to build models of their behavior and thereby define the most probable expected scenarios. Once the scenarios are defined, game theory is applied to support the choice of the action to be performed. Our use of game theory is intended to support one specific agent, not to achieve equilibrium in the market. MASCEM (Multi-Agent System for Competitive Electricity Markets) is a multi-agent electricity market simulator that models market players and simulates their operation in the market. The scenario-analysis algorithm has been tested within MASCEM, and our experimental findings with a case study based on real data from the Iberian Electricity Market are presented and discussed.
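The decision-support idea can be sketched in a few lines: given forecast probabilities for competitors' behaviour scenarios and a payoff table for the supported agent's candidate actions, pick the action with the highest expected payoff. The action names, scenarios, and numbers below are hypothetical and do not reflect MASCEM's actual interfaces.

```python
def best_action(payoffs, scenario_probs):
    """Pick the action maximizing expected payoff over forecast scenarios.

    payoffs        : {action: {scenario: payoff}}
    scenario_probs : {scenario: probability}, summing to 1
    """
    def expected(action):
        return sum(p * payoffs[action][s] for s, p in scenario_probs.items())

    return max(payoffs, key=expected)
```

This matches the abstract's framing: the scenario forecasts come from models of competitors' behaviour, and the game-theoretic step serves one specific agent rather than computing a market equilibrium.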
Abstract:
LLF (Least Laxity First) scheduling, which assigns a higher priority to a task with a smaller laxity, has been known as an optimal preemptive scheduling algorithm on a single-processor platform. However, little work has been done to illuminate its characteristics upon multiprocessor platforms. In this paper, we identify the dynamics of laxity from the system’s viewpoint and translate these dynamics into an LLF multiprocessor schedulability analysis. More specifically, we first characterize laxity properties under LLF scheduling, focusing on the laxity dynamics associated with a deadline miss. These laxity dynamics describe a lower bound, which leads to the deadline miss, on the number of tasks of certain laxity values at certain time instants. This lower bound is significant because it represents invariants for highly dynamic system parameters (laxity values). Since the laxity of a task depends on the amount of interference from higher-priority tasks, we can then derive a set of conditions to check whether a given task system can go into the laxity dynamics towards a deadline miss. In this way, to the authors’ best knowledge, we propose the first LLF multiprocessor schedulability test based on LLF’s own laxity properties. We also develop an improved schedulability test that exploits slack values. We mathematically prove that the proposed LLF tests dominate the state-of-the-art EDZL tests. We also present simulation results to evaluate the schedulability performance of both the original and improved LLF tests in a quantitative manner.
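For readers unfamiliar with the policy itself, an LLF dispatch step is easy to state: at time t, a task's laxity is (deadline − t − remaining work), and the m smallest-laxity ready tasks run. The snippet below is only this dispatching rule, not the paper's schedulability test; a negative laxity already implies an unavoidable deadline miss, which is the event the paper's laxity dynamics bound.

```python
def llf_pick(tasks, t, m):
    """Return the names of the tasks LLF runs at time t on m processors.

    tasks : list of (name, absolute_deadline, remaining_execution)
    """
    # Laxity = time to deadline minus work still owed; smaller is more urgent.
    ready = [(d - t - rem, name) for name, d, rem in tasks if rem > 0]
    ready.sort()                      # smallest laxity first (name breaks ties)
    return [name for _, name in ready[:m]]
```

Because laxity shrinks whenever a task is ready but not running, priorities can change at every instant — the highly dynamic behaviour that makes multiprocessor LLF analysis hard.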
Abstract:
Consider a single processor and a software system. The software system comprises components and interfaces, where each component has an associated interface and each component comprises a set of constrained-deadline sporadic tasks. A scheduling algorithm (called the global scheduler) determines at each instant which component is active. The active component uses another scheduling algorithm (called the local scheduler) to determine which task is selected for execution on the processor. The interface of a component makes certain information about the component visible to other components; the interfaces of all components are used for schedulability analysis. We address the problem of generating an interface for a component based on the tasks inside the component. We desire to (i) incur only a small loss in schedulability analysis due to the interface and (ii) ensure that the amount of space (counted in bits) of the interface is small; this is because such an interface hides as many details of the component as possible. We present an algorithm for generating such an interface.
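The two goals trade off against each other, and a toy version of the tension (not the paper's actual algorithm) can be shown by summarizing a component's tasks as a single utilization value quantized to a fixed number of bits. Rounding up keeps the interface safe — it never under-reports demand — at the cost of pessimism, i.e., the "small loss in schedulability" the abstract mentions.

```python
import math

def utilization_interface(tasks, bits):
    """Summarize a component as its utilization, quantized up to `bits` bits.

    tasks : [(wcet, period), ...] for the component's sporadic tasks
    Returns a value in [0, 1] that is >= the true utilization (safe),
    representable in `bits` bits (small).  Illustrative only.
    """
    u = sum(c / t for c, t in tasks)
    levels = (1 << bits) - 1
    # Ceiling keeps the interface an over-approximation of real demand.
    return math.ceil(u * levels) / levels
```

Fewer bits means a coarser grid, hence a more pessimistic (but more compact) interface — the same space-versus-accuracy dial the paper's interface-generation problem formalizes for constrained-deadline tasks.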
Abstract:
Power law (PL) and fractional calculus are two faces of phenomena with long-memory behavior. This paper applies the PL description to analyze different periods of the business cycle. With that purpose, the evolution over time of ten important stock market indices (DAX, Dow Jones, NASDAQ, Nikkei, NYSE, S&P500, SSEC, HSI, TWII, and BSE) is studied. An evolutionary algorithm is used to fit the PL parameters. It is observed that PL curve fitting constitutes a good tool for revealing the main characteristics of the signals underlying the global financial dynamic evolution.
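The fitting step can be sketched with a minimal (1+1) evolutionary strategy estimating the parameters (c, α) of y = c·x^α by mutating a candidate and keeping it only when the squared error improves. This is a stand-in for the paper's evolutionary algorithm; the data and hyperparameters are illustrative.

```python
import random

def fit_power_law(xs, ys, steps=5000, seed=1):
    """Fit y = c * x**alpha with a toy (1+1) evolutionary strategy."""
    rng = random.Random(seed)
    c, a = 1.0, 1.0
    err = sum((y - c * x ** a) ** 2 for x, y in zip(xs, ys))
    for _ in range(steps):
        # Mutate both parameters with small Gaussian perturbations.
        nc, na = c + rng.gauss(0, 0.1), a + rng.gauss(0, 0.1)
        ne = sum((y - nc * x ** na) ** 2 for x, y in zip(xs, ys))
        if ne < err:            # greedy selection: keep only improving mutants
            c, a, err = nc, na, ne
    return c, a
```

For real index data one would fit in log-log space or use a population-based algorithm, but the accept-if-better loop above is the essential evolutionary ingredient.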
Abstract:
This paper studies the chromosome information of twenty-five species, namely mammals, fishes, birds, insects, nematodes, fungi, and one plant. A quantifying scheme inspired by the state-space representation of dynamical systems is formulated. Based on this algorithm, the information of each chromosome is converted into a bidimensional distribution. The plots are then analyzed and characterized by means of Shannon entropy. The large volume of information is integrated by averaging the lengths and entropy quantities of each species. The results can be easily visualized, revealing quantitative global genomic information.
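The characterization step rests on the standard Shannon entropy, H = −Σ p·log₂(p). The snippet below computes it directly over symbol frequencies of a nucleotide sequence for brevity; the paper applies the same quantity to each chromosome's bidimensional distribution rather than to the raw sequence.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy (bits/symbol) of the empirical symbol distribution."""
    counts = Counter(sequence)
    n = len(sequence)
    # H = -sum(p * log2(p)) over observed symbol probabilities.
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A uniform four-letter sequence reaches the 2-bit maximum, while a single-letter sequence carries zero entropy — the two extremes between which real chromosome distributions fall.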
Abstract:
The IEEE 802.15.4 standard provides appealing features to simultaneously support real-time and non-real-time traffic, but it is only capable of supporting real-time communications from at most seven devices. Additionally, it cannot guarantee delay bounds lower than the superframe duration. Motivated by this problem, in this paper we propose an Explicit Guaranteed time slot Sharing and Allocation scheme (EGSA) for beacon-enabled IEEE 802.15.4 networks. This scheme is capable of providing tighter delay bounds for real-time communications by splitting the Contention-Free Period (CFP) into smaller mini time slots and by means of a new guaranteed bandwidth allocation scheme for a set of devices with periodic messages. At the same time, the novel bandwidth allocation scheme can maximize the duration of the CFP for non-real-time communications. Performance analysis results show that the EGSA scheme works efficiently and outperforms competitor schemes both in terms of guaranteed delay and bandwidth utilization.
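A back-of-the-envelope sketch of the allocation idea (the structure and numbers are illustrative, not the EGSA specification or the standard's timing tables): each periodic device reserves just enough mini time slots per superframe to carry its message, and whatever remains of the reservable region stays available to other traffic, which the scheme tries to maximize.

```python
import math

def reserve_mini_slots(devices, cfp_mini_slots):
    """Reserve mini slots for periodic devices inside a mini-slotted CFP.

    devices        : [(bits_per_superframe, bits_per_mini_slot), ...]
    cfp_mini_slots : total mini slots available for reservation
    Returns (reserved, leftover) or None if the demand cannot be guaranteed.
    """
    # Each device needs a whole number of mini slots per superframe.
    reserved = sum(math.ceil(bits / slot_cap) for bits, slot_cap in devices)
    if reserved > cfp_mini_slots:
        return None
    return reserved, cfp_mini_slots - reserved
```

Because a mini slot is much shorter than a full guaranteed time slot, per-device rounding waste shrinks, which is one intuition behind the tighter delay bounds and better bandwidth utilization reported for EGSA.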
Abstract:
Consider the problem of scheduling a set of sporadic tasks on a multiprocessor system to meet deadlines using a task-splitting scheduling algorithm. Task-splitting (also called semi-partitioning) scheduling algorithms assign most tasks to just one processor, but a few tasks are assigned to two or more processors, and they are dispatched in a way that ensures a task never executes on two or more processors simultaneously. A certain type of task-splitting algorithm, called slot-based task-splitting, is of particular interest because of its ability to schedule tasks at high processor utilizations. We present a new schedulability analysis for slot-based task-splitting scheduling algorithms that takes overheads into account, along with a new task-assignment algorithm.
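A toy semi-partitioned assignment (not the paper's overhead-aware algorithm) shows the task-splitting idea: fill processors with whole tasks by utilization, and when a task does not fit, leave part of it on the current processor and carry the remainder to the next one, so each split task spans exactly the processors it needs.

```python
def split_assign(utils, m, cap=1.0):
    """Assign task utilizations to m processors, splitting tasks at boundaries.

    utils : per-task utilizations
    Returns per-processor lists of (task_index, share), or None if the set
    does not fit even with splitting.  Dispatching (ensuring a split task
    never runs on two processors at once) is a separate mechanism.
    """
    procs = [[] for _ in range(m)]
    p, free = 0, cap
    for task, u in enumerate(utils):
        while u > 1e-12:
            if p >= m:
                return None
            share = min(u, free)            # what fits on the current processor
            procs[p].append((task, share))
            u -= share
            free -= share
            if free <= 1e-12:               # processor full: move to the next
                p, free = p + 1, cap
    return procs
```

Splitting is what lets such algorithms reach high utilizations: three 0.6-utilization tasks fit on two processors, which no pure partitioning can achieve, at the price of the dispatching overheads the paper's new analysis accounts for.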