562 results for Under-sampled problem
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers and calculation of moving averages, as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population; subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate (MR) and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of MR for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) at MR 9.8 and raw time-series summary data (dataset A) at 9.92. However, for all datasets based on time-series data alone, model complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets consist of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR (10.1 and 10.28), while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by adding risk factor variables to time-series-based models is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend to improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared to risk factors alone parallels recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used together as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
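As a concrete illustration of the evaluation approach this abstract describes, here is a minimal Python sketch of majority-class under-sampling combined with Kappa, AUC and misclassification-rate scoring under cross-validation. It is an illustrative reconstruction, not the study's pipeline (which used learners such as J48 and SMO); the synthetic dataset and the scikit-learn decision tree are stand-ins for the anaesthesia data and Weka-style methods.

```python
# Minimal sketch (not the study's actual pipeline): majority-class
# under-sampling plus Kappa / AUC / misclassification-rate evaluation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import cohen_kappa_score, roc_auc_score, accuracy_score

# Hypothetical imbalanced data standing in for the anaesthesia dataset.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# Under-sample the majority class to match the minority class size.
rng = np.random.default_rng(0)
maj, mino = np.where(y == 0)[0], np.where(y == 1)[0]
keep = np.concatenate([rng.choice(maj, size=len(mino), replace=False), mino])
Xb, yb = X[keep], y[keep]

# n-fold cross-validated predictions, then the three evaluation metrics.
clf = DecisionTreeClassifier(random_state=0)  # stands in for a J48-style DT
pred = cross_val_predict(clf, Xb, yb, cv=10)
proba = cross_val_predict(clf, Xb, yb, cv=10, method="predict_proba")[:, 1]
print("Kappa:", cohen_kappa_score(yb, pred))
print("AUC:  ", roc_auc_score(yb, proba))
print("MR %: ", 100 * (1 - accuracy_score(yb, pred)))
```

Balancing the classes before evaluation is what makes MR comparable across datasets; on the raw distribution a majority-class vote alone would already score around 90%.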
Abstract:
This paper details the design and performance assessment of a unique collision avoidance decision and control strategy for autonomous vision-based See and Avoid systems. The general approach revolves around re-positioning a collision object in the image using image-based visual servoing, without estimating range or time to collision. The decision strategy thus involves determining where to move the collision object, to induce a safe avoidance manoeuvre, and when to cease the avoidance behaviour. These tasks are accomplished by exploiting human navigation models, spiral motion properties, expected image feature uncertainty and the rules of the air. The result is a simple threshold based system that can be tuned and statistically evaluated by extending performance assessment techniques derived for alerting systems. Our results demonstrate how autonomous vision-only See and Avoid systems may be designed under realistic problem constraints, and then evaluated in a manner consistent with aviation expectations.
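The threshold-based decision logic described above can be sketched roughly as follows. This is a hypothetical illustration only: the image coordinates, gains, thresholds and trigger condition are invented for the sketch and are not the paper's actual control law.

```python
# Illustrative sketch of a threshold-based See-and-Avoid decision loop in
# the spirit of the abstract: purely image-based, no range or time-to-
# collision estimate. All coordinates, gains and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class AvoidanceDecision:
    start_px: float = 40.0   # trigger: object this close to image centre
    stop_px: float = 8.0     # cease: object this close to the safe position
    gain: float = 0.02       # proportional visual-servoing gain
    avoiding: bool = False

    def step(self, u_obj: float, u_safe: float) -> float:
        """Return a yaw-rate command from horizontal image coordinates.

        u_obj:  detected object's horizontal image coordinate (pixels)
        u_safe: desired image position that induces a safe avoidance manoeuvre
        """
        # A roughly constant bearing (object stuck near the image centre)
        # is a classic collision cue, so use it as the hypothetical trigger.
        if not self.avoiding and abs(u_obj) < self.start_px:
            self.avoiding = True
        # Cease once the object has been servoed to the safe image position.
        if self.avoiding and abs(u_safe - u_obj) < self.stop_px:
            self.avoiding = False
        return self.gain * (u_safe - u_obj) if self.avoiding else 0.0

ctrl = AvoidanceDecision()
print(ctrl.step(u_obj=10.0, u_safe=120.0))  # triggers avoidance, yaw command
```

The two thresholds are exactly the tunable quantities the abstract says can be statistically evaluated with alerting-system performance techniques.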
Abstract:
Driving under the influence (DUI) is a major road safety problem. Historically, alcohol has been assumed to play the larger role in crashes, and DUI education programs have reflected this assumption, although recent evidence suggests that younger drivers are becoming more likely to drive drugged than to drive drunk. This is a study of 7096 Texas clients under age 21 who were admitted to state-funded treatment programs between 1997 and 2007 with a past-year DUI arrest, DUI probation, or DUI referral. Data were obtained from the State’s administrative dataset. Multivariate logistic regression models were used to understand the differences between minors entering treatment as DUI clients compared with non-DUI clients, as well as the risks for completing treatment and for being abstinent in the month prior to follow-up. A major finding was that, over time, the primary problem for underage DUI drivers changed from alcohol to marijuana. Being abstinent in the month prior to discharge, having a primary problem with alcohol rather than another drug, and having more family involvement were the strongest predictors of treatment completion. Living in a household where the client was exposed to alcohol abuse or drug use, having been in residential treatment, and having more drug, alcohol and family problems were the strongest predictors of not being abstinent at follow-up. As a result, there is a need to direct more attention towards meeting the needs of the young DUI population through programs that address drug as well as alcohol consumption problems.
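For readers unfamiliar with the method, here is a minimal sketch of the kind of multivariate logistic regression described. The variable names are hypothetical stand-ins and the data are simulated; the Texas administrative dataset itself is not reproduced here.

```python
# Illustrative only: a multivariate logistic regression of the kind the
# study describes. Variable names are hypothetical; the data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "abstinent_at_discharge": rng.integers(0, 2, n),
    "primary_alcohol": rng.integers(0, 2, n),      # vs. another drug
    "family_involvement": rng.integers(0, 5, n),   # e.g. sessions attended
})
# Simulated outcome loosely echoing the abstract's reported predictors.
lin = (-1.0 + 1.2 * df.abstinent_at_discharge
       + 0.8 * df.primary_alcohol + 0.3 * df.family_involvement)
df["completed_treatment"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

model = smf.logit(
    "completed_treatment ~ abstinent_at_discharge + primary_alcohol"
    " + family_involvement", data=df).fit(disp=False)
print(np.exp(model.params))  # odds ratios for each predictor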
Abstract:
The concept of "fair basing" is widely acknowledged as a difficult area of patent law. This article maps the development of fair basing law to demonstrate how some of the difficulties have arisen. Part I of the article traces the development of the branches of patent law that were swept under the nomenclature of "fair basing" by British legislation in 1949. It looks at the early courts' approach to patent construction, and examines the early origins of fair basing and what it was intended to achieve. Part II of the article considers the modern interpretation of fair basing, which provides a striking contrast to its historical context. Without any consistent judicial approach to construction, the doctrine has developed inappropriately, giving rise to both over-strict and over-generous approaches.
Abstract:
In an earlier article the concept of fair basing in Australian patent law was described as a "problem child", often unruly and unpredictable in practice, but nevertheless understandable and useful in policy terms. The article traced the development of several different branches of patent law that were swept under the nomenclature of "fair basing" in Britain in 1949. It then went on to examine the adoption of fair basis into Australian law, the modern interpretation of the requirement, and its problems. This article provides an update. After briefly recapping the relevant historical issues, it examines the recent Lockwood "internal" fair basing case in the Federal and High Courts.
Abstract:
Many large coal mining operations in Australia rely heavily on the rail network to transport coal from mines to coal terminals at ports for shipment. Over the last few years, due to fast-growing demand, the coal rail network has become one of the worst industrial bottlenecks in Australia. This provides great incentives for pursuing better optimisation and control strategies for the operation of the whole rail transportation system under network and terminal capacity constraints. This PhD research aims to achieve a significant efficiency improvement in a coal rail network through the development of standard modelling approaches and generic solution techniques. Generally, the train scheduling problem can be modelled as a Blocking Parallel-Machine Job-Shop Scheduling (BPMJSS) problem. In a BPMJSS model for train scheduling, trains and sections are synonymous with jobs and machines respectively, and an operation is regarded as the movement/traversal of a train across a section. To begin, an improved shifting bottleneck procedure algorithm combined with metaheuristics has been developed to efficiently solve Parallel-Machine Job-Shop Scheduling (PMJSS) problems without the blocking conditions. Due to the lack of buffer space, real-life train scheduling must consider blocking or hold-while-wait constraints, meaning that a track section cannot release, and must hold, a train until the next section on the routing becomes available. As a consequence, the problem has been considered as BPMJSS, with the blocking conditions. To develop efficient solution techniques for BPMJSS, extensive studies on non-classical scheduling problems with various buffer conditions (i.e. blocking, no-wait, limited-buffer, unlimited-buffer and combined-buffer) have been carried out. In this procedure, an alternative graph, as an extension of the classical disjunctive graph, is developed and specially designed for non-classical scheduling problems such as the blocking flow-shop scheduling (BFSS), no-wait flow-shop scheduling (NWFSS), and blocking job-shop scheduling (BJSS) problems. By exploring the blocking characteristics based on the alternative graph, a new algorithm called the topological-sequence algorithm is developed for solving the non-classical scheduling problems. To demonstrate the merits of the proposed algorithm, we compare it with two known algorithms (i.e. Recursive Procedure and Directed Graph) in the literature. Moreover, we define a new type of non-classical scheduling problem, called combined-buffer flow-shop scheduling (CBFSS), which covers four extreme cases: the classical flow-shop scheduling (FSS) problem with infinite buffer, the blocking FSS (BFSS) with no buffer, the no-wait FSS (NWFSS) and the limited-buffer FSS (LBFSS). After exploring the structural properties of CBFSS, we propose an innovative constructive algorithm named the LK algorithm to construct feasible CBFSS schedules. Detailed numerical illustrations for the various cases are presented and analysed. By adjusting only the attributes in the data input, the proposed LK algorithm is generic and enables the construction of feasible schedules for many types of non-classical scheduling problems with different buffer constraints.
Inspired by the shifting bottleneck procedure algorithm for PMJSS and by characteristic analysis based on the alternative graph for non-classical scheduling problems, a new constructive algorithm called the Feasibility Satisfaction Procedure (FSP) is proposed to obtain feasible BPMJSS solutions. A real-world train scheduling case is used to illustrate and compare the PMJSS and BPMJSS models. Some real-life applications, including considering the train length, upgrading the track sections, accelerating a tardy train and changing the bottleneck sections, are discussed. Furthermore, the BPMJSS model is generalised to a No-Wait Blocking Parallel-Machine Job-Shop Scheduling (NWBPMJSS) problem for scheduling trains with priorities, in which prioritised trains such as express passenger trains are considered simultaneously with non-prioritised trains such as freight trains. In this case, no-wait conditions, which are more restrictive than blocking constraints, arise when considering prioritised trains, which should traverse continuously without any interruption or unplanned pauses because of the high cost of waiting during travel. In comparison, non-prioritised trains are allowed to enter the next section immediately if possible, or to remain in a section until the next section on the routing becomes available. Based on the FSP algorithm, a more generic algorithm called the SE algorithm is developed to solve a class of train scheduling problems under different conditions in train scheduling environments. To construct a feasible train schedule, the proposed SE algorithm consists of several modules: the feasibility-satisfaction procedure, the time-determination procedure, the tune-up procedure and the conflict-resolve procedure. To find a good train schedule, a two-stage hybrid heuristic algorithm called the SE-BIH algorithm is developed by combining the constructive heuristic (the SE algorithm) and a local-search heuristic (the Best-Insertion-Heuristic algorithm). To optimise the train schedule, a three-stage algorithm called the SE-BIH-TS algorithm is developed by combining the tabu search (TS) metaheuristic with the SE-BIH algorithm. Finally, a case study is performed for a complex real-world coal rail network under network and terminal capacity constraints. The computational results show that the proposed methodology is very promising, as it can be applied as a fundamental tool for modelling and solving many real-world scheduling problems.
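To make the alternative-graph idea concrete, the sketch below shows, under simplifying assumptions and not as the thesis's actual implementation, how a selection of alternative arcs can be checked for feasibility: fixed arcs encode each train's route, each alternative pair encodes the two possible orders of two trains on a shared section, and a selection is feasible exactly when the combined digraph is acyclic, which a topological sort detects.

```python
# Minimal sketch (assumptions, not the thesis's code): alternative-graph
# style feasibility check for a blocking job shop. Nodes are operations
# (train x section); a selection of alternative arcs is feasible iff the
# resulting digraph is acyclic.
from collections import defaultdict, deque

def is_feasible(nodes, fixed_arcs, selected_arcs):
    """Return True if fixed + selected arcs form an acyclic digraph."""
    indeg = {v: 0 for v in nodes}
    adj = defaultdict(list)
    for u, v in list(fixed_arcs) + list(selected_arcs):
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in nodes if indeg[v] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(nodes)  # every node ordered => no cycle

# Toy example: two trains (t1, t2) traversing two sections (s1, s2).
nodes = ["t1_s1", "t1_s2", "t2_s1", "t2_s2"]
routes = [("t1_s1", "t1_s2"), ("t2_s1", "t2_s2")]
# Under blocking, "t1 before t2 on s1" means t2 may enter s1 only once t1
# has entered s2, hence an arc from t1_s2 to t2_s1.
print(is_feasible(nodes, routes, [("t1_s2", "t2_s1")]))                      # True
print(is_feasible(nodes, routes, [("t1_s2", "t2_s1"), ("t2_s2", "t1_s1")]))  # False: cycle
```

A constructive procedure like FSP can be thought of as choosing one arc from each alternative pair while keeping this acyclicity invariant.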
Abstract:
The main focus of this paper is the motion planning problem for a deeply submerged rigid body. The equations of motion are formulated and presented using the framework of differential geometry, and they incorporate external dissipative and restoring forces. We consider a kinematic reduction of the affine connection control system for a rigid body submerged in an ideal fluid, and present an extension of this reduction to the forced affine connection control system for a rigid body submerged in a viscous fluid. The motion planning strategy is based on kinematic motions: the integral curves of rank-one kinematic reductions. This method is of particular interest for autonomous underwater vehicles that cannot directly control all six degrees of freedom (such as torpedo-shaped AUVs) or in the case of actuator failure (i.e., an under-actuated scenario). A practical example is included to illustrate our technique.
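For orientation, the standard geometric form such systems take is sketched below in Bullo-Lewis style notation; this is a hedged reconstruction, and the paper's exact symbols and the viscous-drag extension may differ.

```latex
% Forced affine connection control system (sketch; notation may differ):
%   \nabla   : affine connection induced by the kinetic-energy metric
%   Y        : external dissipative and restoring forces
%   Y_a, u^a : input vector fields and controls
\nabla_{\gamma'(t)}\,\gamma'(t)
   = Y\big(\gamma(t),\gamma'(t)\big)
   + \sum_{a=1}^{m} u^{a}(t)\,Y_a\big(\gamma(t)\big)

% A vector field X gives a rank-one kinematic reduction (a decoupling
% vector field) when X and its symmetric product with itself both lie in
% the input distribution:
X,\ \langle X : X \rangle \in \operatorname{span}\{Y_1,\dots,Y_m\},
\qquad \langle X : X \rangle := 2\,\nabla_X X
```

Motion plans are then concatenations of (reparametrised) integral curves of such fields X, which is what makes the approach attractive for under-actuated vehicles.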
Abstract:
Recommender systems are widely used online to help users find products, items, etc., that they may be interested in, based on what is known about that user from their profile. Often, however, user profiles are short on information, making it difficult for a recommender system to make quality recommendations. This is known as the cold-start problem. Here we investigate using association rules as a source of information to expand a user profile and thus avoid this problem. Our experiments show that it is possible to use association rules to noticeably improve the performance of a recommender system under the cold-start situation. Furthermore, we show that this improvement can be achieved while using non-redundant rule sets. This shows that non-redundant rules do not cause a loss of information and are just as informative as a set of association rules that contains redundancy.
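The profile-expansion idea can be sketched as follows; this is a generic illustration under assumed rule and profile contents, not the paper's system or its rule-mining procedure.

```python
# Minimal sketch: expand a sparse (cold-start) user profile by firing
# association rules (antecedent -> consequent) whose antecedents the
# profile already satisfies. Rules and the profile are hypothetical.
rules = [
    ({"bread", "butter"}, {"milk"}, 0.82),   # (antecedent, consequent, conf)
    ({"milk"}, {"cereal"}, 0.64),
    ({"beer"}, {"chips"}, 0.71),
]

def expand_profile(profile, rules, min_conf=0.6):
    """Add consequents of confident rules until no rule fires anymore."""
    expanded = set(profile)
    changed = True
    while changed:                           # iterate to a fixed point
        changed = False
        for antecedent, consequent, conf in rules:
            if (conf >= min_conf and antecedent <= expanded
                    and not consequent <= expanded):
                expanded |= consequent
                changed = True
    return expanded

print(expand_profile({"bread", "butter"}, rules))
# -> {'bread', 'butter', 'milk', 'cereal'}: the enlarged profile gives a
#    neighbourhood-based recommender enough overlap to work with.
```

Using a non-redundant rule set simply shrinks the `rules` list without changing the fixed point reached, which matches the paper's finding that no information is lost.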
Abstract:
The combination of alcohol and driving is a major health and economic burden to most communities in industrialised countries. The total cost of crashes for Australia in 1996 was estimated at approximately 15 billion dollars, and the costs for fatal crashes were about 3 billion dollars (BTE, 2000). According to the Bureau of Infrastructure, Transport and Regional Development and Local Government (2009; BITRDLG), the overall cost of road fatality crashes for 2006 was $3.87 billion, with a single fatal crash costing an estimated $2.67 million. A major contributing factor to crashes involving serious injury is alcohol intoxication while driving. It is a well-documented fact that consumption of liquor impairs judgment of speed and distance and increases involvement in higher-risk behaviours (Waller, Hansen, Stutts, & Popkin, 1986a; Waller et al., 1986b). Waller et al. (1986a; b) assert that liquor impairs psychomotor function and therefore renders the driver impaired in a crisis situation. This impairment includes vision (degraded), information processing (slowed), steering, and performing two tasks at once in congested traffic (Moskowitz & Burns, 1990). As blood alcohol concentration (BAC) levels increase, the risks of crashing and of fatality increase exponentially (Department of Transport and Main Roads, 2009; DTMR). According to Compton et al. (2002), as cited in the Department of Transport and Main Roads (2009), crash risk is five times higher at a BAC of 0.10 than at a BAC of 0.00. The type of injury patterns sustained also tends to be more severe when liquor is involved, especially with injuries to the brain (Waller et al., 1986b). Single and Rohl (1997) reported that 30% of all fatal crashes in Australia where alcohol involvement was known were associated with a BAC above the legal limit of 0.05g/100ml. Alcohol-related crashes therefore account for a third of the total cost of fatal crashes (i.e. $1 billion annually), and crashes where alcohol is involved are more likely to result in death or serious injury (ARRB Transport Research, 1999). It is a major concern that a drug so capable of impairment, alcohol, is the most available and popular drug in Australia (Australian Institute of Health and Welfare, 2007; AIHW). According to the AIHW (2007), 89.9% of the approximately 25,000 Australians over the age of 14 surveyed had consumed alcohol at some point in time, and 82.9% had consumed liquor in the previous year. This study found that 12.1% of individuals admitted to driving a motor vehicle whilst intoxicated. In general, males consumed more liquor in all age groups. In Queensland there were 21,503 road crashes in 2001, involving 324 fatalities, and 23,438 road crashes in 2004, involving 289 fatalities; in both years the largest contributing factor was alcohol and/or drugs (Road Traffic Report, 2001; DTMR, 2009). Although a number of measures such as random breath testing have been effective in reducing the road toll (Watson, Fraine & Mitchell, 1995), the recidivist drink driver remains a serious problem. These findings were later supported by research by Leal, King, and Lewis (2006). This Queensland study found that of the 24,661 drink drivers intercepted in 2004, 3,679 (14.9%) were recidivists with multiple drink driving convictions in the previous three years (Leal et al., 2006).
The legal definition of the term "recidivist" is consistent with the Transport Operations (Road Use Management) Act 1995 and is assigned to individuals who have been charged with multiple drink driving offences in the previous five years. In Australia relatively little attention has been given to prevention programs that target high-risk repeat drink drivers. However, over the last ten years a rehabilitation program specifically designed to reduce recidivism among repeat drink drivers has been operating in Queensland. The program, formally known as the "Under the Limit" drink driving rehabilitation program (UTL), was designed and implemented by the research team at the Centre for Accident Research and Road Safety in Queensland, with funding from the Federal Office of Road Safety and the Institute of Criminology (see Sheehan, Schonfeld & Davey, 1995). By 2009 over 8,500 drink driving offenders had been referred to the program (Australian Institute of Crime, 2009).
Abstract:
This article examines the problem of patent ambush in standard setting, where patent owners are sometimes able to capture industry standards in order to secure monopoly power and windfall profits. Because standardisation generally introduces high switching costs, patent ambush can impose significant costs on downstream manufacturers and consumers and drastically reduce the efficiency gains of standardisation. This article considers how Australian competition law is likely to apply to patent ambush both in the development of a standard (through misrepresenting the existence of an essential patent) and after a standard is implemented (through refusing to license an essential patented technology either at all or on reasonable and non-discriminatory (RAND) terms). This article suggests that non-disclosure of patent interests is unlikely to be restrained by Part IV of the Trade Practices Act (TPA), and that refusals to license are only likely to be restrained if the refusal involves leveraging or exclusive dealing. By contrast, Standard Setting Organisations (SSOs) which seek to limit this behaviour through private ordering may face considerable scrutiny under the new cartel provisions of the TPA. This article concludes that SSOs may be best advised to implement administrative measures to prevent patent hold-up, such as reviewing which patents are essential for the implementation of a standard, asking patent holders to make their licence conditions public to promote transparency, and establishing forums where patent licensees can complain about licence terms that they consider to be unreasonable or discriminatory. Additionally, the ACCC may play a role in authorising SSO policies that could otherwise breach the new cartel provisions but which have the practical effect of promoting competition in the standard setting environment.
Abstract:
In an era of complex challenges that draw sustained media attention and entangle multiple organisational actors, this thesis addresses the gap between current trends in society and business, and existing scholarship in public relations and crisis communication. By responding to calls from crisis communication researchers to develop theory (Coombs, 2006a), to examine the interdependencies of crises (Seeger, Sellnow, & Ulmer, 1998), and to consider variation in crisis response (Seeger, 2002), this thesis contributes to theory development in crisis communication and public relations. Through transformative change, this thesis extends existing scholarship built on a preservation or conservation logic where public relations is used to maintain stability by incrementally responding to changes in an organisation’s environment (Cutlip, Center, & Broom, 2006; Everett, 2001; Grunig, 2000; Spicer, 1997). Based on the opportunity to contribute to ongoing theoretical development in the literature, the overall research problem guiding this thesis asks: How does transformative change during crisis influence corporate actors’ communication? This thesis adopts punctuated equilibrium theory, which describes change as alternating between long periods of stability and short periods of revolutionary or transformative change (Gersick, 1991; Romanelli & Tushman, 1994; Siggelkow, 2002; Tushman, Newman, & Romanelli, 1986; Tushman & Romanelli, 1985). As a theory for change, punctuated equilibrium provides an opportunity to examine public relations and transformative change, building on scholarship that is based primarily on incremental change. Further, existing scholarship in public relations and crisis communication focuses on the actions of single organisations in situational or short-term crisis events. Punctuated equilibrium theory enables the study of multiple crises and multiple organisational responses during transformative change. In doing so, punctuated equilibrium theory provides a framework to explain both the context for transformative change and the actions or strategies enacted by organisations during transformative change (Tushman, Newman, & Romanelli, 1986; Tushman & Romanelli, 1985; Tushman, Virany, & Romanelli, 1986). The connections between context and action inform the research questions that guide this thesis: RQ1: What symbolic and substantive strategies persist and change as crises develop from situational events to transformative and multiple linked events? RQ2: What features of the crisis context influence changes in symbolic and substantive strategies? To shed light on these research questions, the thesis adopts a qualitative approach guided by process theory and methods to explicate the events, sequences and activities that were essential to change (Pettigrew, 1992; Van de Ven, 1992). Specifically, the thesis draws on an alternative template strategy (Langley, 1999) that provides several alternative interpretations of the same events (Allison, 1971; Allison & Zelikow, 1999). Following Allison (1971) and Allison and Zelikow (1999), this thesis uses three alternative templates of crisis or strategic response typologies to construct three narratives using media articles and organisational documents. The narratives are compared to identify and draw out different patterns of crisis communication strategies that operate within different crisis contexts. The thesis is based on the crisis events that affected three organisations within the pharmaceutical industry for four years.
The primary organisation is Merck, as its product recall crisis triggered transformative change affecting, in different ways, the secondary organisations of Pfizer and Novartis. Three narratives are presented based on the crisis or strategic response typologies of Coombs (2006b), Allen and Caillouet (1994), and Oliver (1991). The findings of this thesis reveal different stories about crisis communication under transformative change. By zooming in to a micro perspective (Nicolini, 2009) to focus on the crisis communication and actions of a single organisation and zooming out to a macro perspective (Nicolini, 2009) to consider multiple organisations, new insights about crisis communication, change and the relationships among multiple organisations are revealed at context and action levels. At the context level, each subsequent narrative demonstrates greater connections among multiple corporate actors. By zooming out from Coombs’ (2006b) focus on single organisations to consider Allen and Caillouet’s (1994) integration of the web of corporate actors, the thesis demonstrates how corporate actors add accountability pressures to the primary organisation. Next, by zooming further out to the macro perspective by considering Oliver’s (1991) strategic responses to institutional processes, the thesis reveals a greater range of corporate actors that are caught up in the process of transformative change and accounts for their varying levels of agency over their environment. By zooming in to a micro perspective and out to a macro perspective (Nicolini, 2009) across alternative templates, the thesis sheds light on sequences, events, and actions of primary and secondary organisations. Although the primary organisation remains the focus of sustained media attention across the four-year time frame, the secondary organisations, even when one faced a similar starting situation to the primary organisation, were buffered by the process of transformative change. This understanding of crisis contexts in transforming environments builds on existing knowledge in crisis communication. At the action level, the thesis also reveals different interpretations from each alternative template. Coombs’ (2006b) narrative shows persistence in the primary organisation’s crisis or strategic responses over the four-year time frame of the thesis. That is, the primary organisation consistently applies a diminish crisis response. At times, the primary organisation drew on denial responses when corporate actors questioned its legitimacy or actions. To close the crisis, the primary organisation uses a rebuild crisis posture (Coombs, 2006). These findings are replicated in Allen and Caillouet’s (1994) narrative, noting this template’s limitation to communication messages only. Oliver’s (1991) narrative is consistent with Coombs’ (2006b) but also demonstrates a shift from a strategic response that signals conformity to the environment to one that signals more active resistance to the environment over time. Specifically, the primary organisation’s initial response demonstrates conformity, but these same messages were used some three years later to set new expectations in the environment in order to shape criteria and build acceptance for future organisational decisions. In summary, the findings demonstrate the power of crisis or strategic responses when considered over time and in the context of transformative change. The conclusions of this research contribute to scholarship in the public relations and management literatures.
Based on the significance of organisational theory, the primary contribution of this thesis relates to the role of interorganisational linkages or legitimacy buffers that form during the punctuation of equilibrium. The network of linkages among the corporate actors is significant also to the crisis communication literature, as it forms part of the process model of crisis communication under punctuated equilibrium. This model extends existing research that focuses on crisis communication of single organisations to consider the emergent context that incorporates secondary organisations as well as the localised contests of legitimacy and buffers from regulatory authorities. The thesis also provides an empirical base for punctuated equilibrium in public relations and crisis communication, extending Murphy’s (2000) introduction of the theory to the public relations literature. In doing this, punctuated equilibrium theory reinvigorates theoretical development in crisis communication by extending existing scholarship around incrementalist approaches and demonstrating how public relations works in the context of transformative change. Further research in this area could consider using alternative templates to study transformative change caused by a range of crisis types, from natural disasters to product tampering, and could add further insight into the dynamics between primary and secondary organisations. This thesis contributes to practice by providing guidelines for crisis response strategy selection, and indicators related to the emergent context for crises under transformative change, that will help primary and secondary organisations’ responses to crises.
Abstract:
In dynamic and uncertain environments such as healthcare, where the needs of security and information availability are difficult to balance, an access control approach based on a static policy will be suboptimal regardless of how comprehensive it is. The uncertainty stems from the unpredictability of users’ operational needs as well as their private incentives to misuse permissions. In Role Based Access Control (RBAC), a user’s legitimate access request may be denied because its need has not been anticipated by the security administrator. Alternatively, even when the policy is correctly specified, an authorised user may accidentally or intentionally misuse the granted permission. This paper introduces a novel approach to access control under uncertainty and presents it in the context of RBAC. Taking insights from the field of economics, in particular the insurance literature, we propose a formal model where the value of resources is explicitly defined and an RBAC policy (capturing those predictable access needs) is used only as a reference point to determine the price each user has to pay for access, as opposed to representing hard and fast rules that are always rigidly applied.
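The flavour of such a pricing scheme can be sketched as follows. This is a hypothetical toy model invented for illustration, not the paper's formal model: the policy, resource values, premium and budgets are all assumptions.

```python
# Illustrative sketch only (hypothetical model, not the paper's): an RBAC
# policy used as a reference point for pricing access rather than as a
# hard rule. Off-policy requests remain possible but cost more, in
# proportion to the explicitly defined value of the resource.
POLICY = {("nurse", "patient_record"): "read"}   # anticipated access needs
VALUE = {"patient_record": 10.0}                 # explicit resource values

def access_price(role, resource, action, risk_premium=0.5):
    """Free if the policy anticipated the need; risk-priced otherwise."""
    if POLICY.get((role, resource)) == action:
        return 0.0                               # anticipated, pre-approved
    return VALUE[resource] * risk_premium        # unanticipated: pay for risk

def request(user, role, resource, action, budgets):
    price = access_price(role, resource, action)
    if budgets[user] >= price:
        budgets[user] -= price                   # grant and charge premium
        return True
    return False                                 # budget exhausted: deny

budgets = {"alice": 6.0}
print(request("alice", "nurse", "patient_record", "read", budgets))   # True, free
print(request("alice", "nurse", "patient_record", "write", budgets))  # True, costs 5.0
print(request("alice", "nurse", "patient_record", "write", budgets))  # False, budget spent
```

The point of the design is that legitimate but unanticipated needs are not hard-denied, while sustained misuse quickly exhausts a user's budget.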
Abstract:
We study the problem of allocating stocks to dark pools. We propose and analyze an optimal approach for allocations, if continuous-valued allocations are allowed. We also propose a modification for the case when only integer-valued allocations are possible. We extend the previous work on this problem to adversarial scenarios, while also improving on their results in the iid setup. The resulting algorithms are efficient, and perform well in simulations under stochastic and adversarial inputs.
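For flavour, here is a toy sketch of the kind of adaptive allocation problem studied: split an order across pools and adapt from the censored fills each pool returns. It is a generic multiplicative-weights style illustration with simulated liquidity, not the paper's algorithm or its guarantees.

```python
# Toy sketch (hypothetical, not the paper's method): allocate an order
# across K dark pools each round and adapt weights multiplicatively from
# the censored fills observed. Pool liquidity is simulated.
import random

K, V, eta = 3, 100, 0.1          # pools, shares per round, learning rate
w = [1.0] * K                    # allocation weights

for t in range(200):
    total = sum(w)
    alloc = [int(V * wi / total) for wi in w]              # integer allocations
    liquidity = [random.randint(0, 40) for _ in range(K)]  # hidden demand
    fills = [min(a, l) for a, l in zip(alloc, liquidity)]  # censored feedback
    for i in range(K):
        if alloc[i] > 0:
            # Reward each pool by the fraction of its allocation filled.
            w[i] *= 1 + eta * fills[i] / alloc[i]

print([round(wi / sum(w), 2) for wi in w])  # learned allocation proportions
```

The censoring is the crux: a pool that fills everything it was sent reveals only a lower bound on its liquidity, which is why allocation and learning have to be interleaved.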
Abstract:
In practice, parallel-machine job-shop scheduling (PMJSS) is very useful in the development of standard modelling approaches and generic solution techniques for many real-world scheduling problems. In this paper, based on the analysis of structural properties in an extended disjunctive graph model, a hybrid shifting bottleneck procedure (HSBP) algorithm combined with a Tabu Search metaheuristic is developed to deal with the PMJSS problem. The original SBP algorithm for job-shop scheduling (JSS) has been significantly improved to solve the PMJSS problem, with four novelties: i) a topological-sequence algorithm is proposed to decompose the PMJSS problem into a set of single-machine scheduling (SMS) and/or parallel-machine scheduling (PMS) subproblems; ii) a modified Carlier algorithm, based on proposed lemmas and proofs, is developed to solve the SMS subproblem; iii) the Jackson rule is extended to solve the PMS subproblem; iv) a Tabu Search metaheuristic is embedded within the SBP framework to optimise the JSS and PMJSS cases. The computational experiments show that the proposed HSBP is very efficient in solving the JSS and PMJSS problems.
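As background for the single-machine subproblem, the sketch below shows the classic Schrage heuristic for one machine with release dates and delivery times (tails), the building block around which Carlier-style branch-and-bound is usually organised. This is textbook material, not the paper's modified Carlier algorithm.

```python
# Hedged sketch: Schrage's heuristic for 1|r_j, q_j|max(C_j + q_j), the
# single-machine subproblem solved inside Carlier-style algorithms.
# It is a heuristic: Carlier's branch-and-bound closes the optimality gap.
import heapq

def schrage(jobs):
    """jobs: list of (r, p, q) = (release, processing, tail).
    Returns (makespan including tails, schedule order)."""
    pending = sorted(jobs, key=lambda j: j[0])  # by release date
    ready, order, t, cmax, i = [], [], 0, 0, 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][0] <= t:
            r, p, q = pending[i]
            heapq.heappush(ready, (-q, r, p))   # largest tail first
            i += 1
        if not ready:
            t = pending[i][0]                   # idle until next release
            continue
        negq, r, p = heapq.heappop(ready)
        order.append((r, p, -negq))
        t += p
        cmax = max(cmax, t + (-negq))           # completion plus tail
    return cmax, order

print(schrage([(0, 4, 7), (1, 3, 9), (3, 2, 2)])[0])  # -> 16 (optimum is 15)
```

The Jackson rule mentioned in novelty iii) is the same largest-tail priority idea, extended in the paper to the parallel-machine case.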
Abstract:
This research deals with an innovative methodology for optimising the coal train scheduling problem. Based on our previously published work, generic solution techniques are developed by utilising a "toolbox" of well-solved standard scheduling problems. According to our analysis, the coal train scheduling problem can essentially be modelled as a Blocking Parallel-Machine Job-Shop Scheduling (BPMJSS) problem with some minor constraints. To construct feasible train schedules, an innovative constructive algorithm called the SLEK algorithm is proposed. To optimise the train schedule, a three-stage hybrid algorithm called the SLEK-BIH-TS algorithm is developed, based on the definition of a sophisticated neighbourhood structure under the mechanism of the Best-Insertion-Heuristic (BIH) algorithm and the Tabu Search (TS) metaheuristic. A case study is performed for optimising a complex real-world coal rail system in Australia. A method to calculate a lower bound on the makespan is proposed to evaluate the results. The results indicate that the proposed methodology is promising for finding optimal or near-optimal feasible train timetables of a coal rail system under network and terminal capacity constraints.
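To illustrate the kind of calculation such an evaluation involves, here is a hedged sketch of one standard machine-based makespan lower bound. The abstract does not specify the thesis's exact method, so the formula and the toy data below are illustrative assumptions.

```python
# Hedged sketch of a standard makespan lower bound: no schedule can finish
# before the earliest arrival at a section, plus the total time all trains
# must occupy that section, plus the shortest remaining journey after it.
def makespan_lower_bound(ops_by_section):
    """ops_by_section: {section: [(head, proc, tail), ...]} where head/tail
    are each train's minimum travel times before/after the section."""
    lb = 0
    for ops in ops_by_section.values():
        head = min(h for h, _, _ in ops)    # earliest possible arrival
        work = sum(p for _, p, _ in ops)    # total occupation of the section
        tail = min(t for _, _, t in ops)    # shortest onward journey
        lb = max(lb, head + work + tail)
    return lb

# Toy data: two sections, three trains each.
print(makespan_lower_bound({
    "S1": [(0, 5, 9), (2, 4, 7), (4, 6, 3)],
    "S2": [(5, 3, 0), (6, 4, 0), (10, 5, 0)],
}))  # -> max(0+15+3, 5+12+0) = 18
```

Comparing a heuristic timetable's makespan against such a bound gives a guaranteed optimality gap even when the true optimum is unknown.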