857 results for Critical plane approach
Abstract:
The sharing of near real-time traceability knowledge in supply chains plays a central role in coordinating business operations and is a key driver for their success. However, before traceability datasets received from external partners can be integrated with datasets generated internally within an organisation, they need to be validated against information recorded for the physical goods received as well as against bespoke rules defined to ensure uniformity, consistency and completeness within the supply chain. In this paper, we present a knowledge-driven framework for the runtime validation of critical constraints on incoming traceability datasets encapsulated as EPCIS event-based linked pedigrees. Our constraints are defined using SPARQL queries and SPIN rules. We present a novel validation architecture based on the integration of the Apache Storm framework for real-time, distributed computation with popular Semantic Web/Linked Data libraries, and exemplify our methodology on an abstraction of the pharmaceutical supply chain.
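As a purely illustrative aside (not the constraint set or architecture from the paper), the sketch below shows how an EPCIS-style event graph could be checked against a completeness constraint with a SPARQL ASK query via rdflib; the vocabulary, sample data, and rule are hypothetical.

```python
# Minimal sketch: validating a hypothetical completeness constraint on an
# EPCIS-style event graph with rdflib. The vocabulary, data, and rule are
# illustrative placeholders, not the constraints defined in the paper.
from rdflib import Graph

EVENT_DATA = """
@prefix ex: <http://example.org/epcis#> .
ex:event1 a ex:ObjectEvent ;
    ex:epc "urn:epc:id:sgtin:0614141.107346.2017" ;
    ex:bizStep ex:shipping .
"""

# Constraint: every ObjectEvent must carry an EPC; ASK returns true if any event violates it.
COMPLETENESS_CHECK = """
PREFIX ex: <http://example.org/epcis#>
ASK {
    ?event a ex:ObjectEvent .
    FILTER NOT EXISTS { ?event ex:epc ?epc . }
}
"""

g = Graph()
g.parse(data=EVENT_DATA, format="turtle")

result = g.query(COMPLETENESS_CHECK)
print("constraint violated" if result.askAnswer else "constraint satisfied")
```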
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015.
Abstract:
We analyze a business model for e-supermarkets to enable multi-product sourcing capacity through co-opetition (collaborative competition). The logistics aspect of our approach is to design and execute a network system where “premium” goods are acquired from vendors at multiple locations in the supply network and delivered to customers. Our specific goals are to: (i) investigate the role of premium product offerings in creating critical mass and profit; (ii) develop a model for the multiple-pickup single-delivery vehicle routing problem in the presence of multiple vendors; and (iii) propose a hybrid solution approach. To solve the problem introduced in this paper, we develop a hybrid metaheuristic approach that uses a Genetic Algorithm for vendor selection and allocation, and a modified savings algorithm for the capacitated VRP with multiple pickup, single delivery and time windows (CVRPMPDTW). The proposed Genetic Algorithm guides the search for optimal vendor pickup location decisions, and for each generated solution in the genetic population, a corresponding CVRPMPDTW is solved using the savings algorithm. We validate our solution approach against published VRPTW solutions and also test our algorithm with Solomon instances modified for CVRPMPDTW.
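For readers unfamiliar with savings-based VRP heuristics, the following is a minimal sketch of the classical Clarke-Wright savings computation that such a heuristic starts from; the coordinates are hypothetical, and the multiple pickups, capacity, and time-window handling of the paper's CVRPMPDTW are deliberately omitted.

```python
# Minimal sketch of the classical Clarke-Wright savings computation that a
# savings-based VRP heuristic starts from. The coordinates are hypothetical,
# and capacity, multiple pickups, and time windows are deliberately omitted.
import math

depot = (0.0, 0.0)
customers = {1: (4.0, 3.0), 2: (6.0, 1.0), 3: (1.0, 5.0)}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Savings of serving i and j on one route instead of two separate routes:
# s(i, j) = d(depot, i) + d(depot, j) - d(i, j)
savings = []
ids = sorted(customers)
for idx, i in enumerate(ids):
    for j in ids[idx + 1:]:
        s = dist(depot, customers[i]) + dist(depot, customers[j]) - dist(customers[i], customers[j])
        savings.append((s, i, j))

# Merge candidates are considered in decreasing order of savings.
for s, i, j in sorted(savings, reverse=True):
    print(f"merge ({i}, {j}) saves {s:.2f}")
```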
Abstract:
Mannitol is an essential excipient employed in orally disintegrating tablets due to its high palatability. However, its fundamental disadvantage is its fragmentation during direct compression, producing mechanically weak tablets. The primary aim of this study was to assess the fracture behaviour of crystalline mannitol in relation to the energy input during direct compression, utilising ball milling as the method of energy input, whilst assessing the tablet characteristics of post-milled powders. Results indicated that crystalline mannitol fractured at the hydrophilic (011) plane, as observed through SEM, alongside a reduction in dispersive surface energy. Disintegration times of post-milled tablets were reduced due to the exposure of the hydrophilic plane, whilst more robust tablets were produced. This was shown through higher tablet hardness and increased plastic deformation profiles of the post-milled powders, as observed with a lower yield pressure in an out-of-die Heckel analysis. Evaluation of the crystal state using X-ray diffraction and differential scanning calorimetry showed that mannitol predominantly retained the β-polymorph; however, X-ray diffraction provided a novel method to calculate the energy input into the powders during ball milling. It can be concluded that particle size reduction is a pragmatic strategy to overcome the current limitation of mannitol fragmentation and provide improvements in tablet properties.
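As a hedged illustration of the out-of-die Heckel analysis mentioned above, the short sketch below fits ln(1/(1−D)) = K·P + A to hypothetical pressure/relative-density data and reports the yield pressure Py = 1/K; the study's actual measurements are not reproduced here.

```python
# Minimal sketch of an out-of-die Heckel analysis: fit ln(1 / (1 - D)) = K*P + A
# over the linear region and report the yield pressure Py = 1/K. The pressure
# and relative-density values below are hypothetical, not measured data.
import numpy as np

pressure_mpa = np.array([25.0, 50.0, 75.0, 100.0, 125.0])    # compaction pressure P
relative_density = np.array([0.78, 0.84, 0.88, 0.91, 0.93])  # out-of-die relative density D

heckel_y = np.log(1.0 / (1.0 - relative_density))
slope, intercept = np.polyfit(pressure_mpa, heckel_y, 1)

yield_pressure = 1.0 / slope
print(f"yield pressure Py ≈ {yield_pressure:.1f} MPa (lower Py -> more plastic deformation)")
```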
Abstract:
A Vehicle-to-Grid (V2G) system with efficient Demand Response Management (DRM) is critical to solving the problem of supplying electricity by utilizing the surplus electricity available in EVs. An incentivized DRM approach is studied to reduce the system cost and maintain system stability. EVs are motivated with dynamic pricing determined by a group-selling based auction. In the proposed approach, a number of aggregators sit at the first-level auction, each responsible for communicating with a group of EVs. EVs, as bidders, consider Quality of Energy (QoE) requirements and report their interests and decisions in the bidding process coordinated by the associated aggregator. Auction winners are determined based on the bidding prices and the amount of electricity sold by the EV bidders. We investigate the impact of the proposed mechanism on system performance under the maximum feedback power constraints of aggregators. The designed mechanism is proven to have essential economic properties. Simulation results indicate that the proposed mechanism can reduce the system cost and offer EVs significant incentives to participate in the V2G DRM operation.
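A simplified, hypothetical winner-determination step is sketched below: rank EV bids by asking price per kWh and accept the cheapest bids until the aggregator's maximum feedback power is reached. This illustrates the flavour of the allocation only, not the group-selling auction mechanism designed in the paper.

```python
# Simplified, hypothetical winner-determination sketch for one aggregator:
# accept the cheapest EV bids until the aggregator's maximum feedback energy is
# reached. This is an illustration only, not the paper's auction mechanism.
from dataclasses import dataclass

@dataclass
class Bid:
    ev_id: str
    energy_kwh: float    # amount of electricity the EV offers to sell
    price_per_kwh: float

def determine_winners(bids, max_feedback_kwh):
    winners, remaining = [], max_feedback_kwh
    for bid in sorted(bids, key=lambda b: b.price_per_kwh):
        if bid.energy_kwh <= remaining:
            winners.append(bid)
            remaining -= bid.energy_kwh
    return winners

bids = [Bid("EV1", 10.0, 0.12), Bid("EV2", 8.0, 0.10), Bid("EV3", 12.0, 0.15)]
for w in determine_winners(bids, max_feedback_kwh=20.0):
    print(w.ev_id, w.energy_kwh, w.price_per_kwh)
```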
Abstract:
From 1992 to 2012, 4.4 billion people were affected by disasters, with almost 2 trillion USD in damages and 1.3 million people killed worldwide. The increasing threat of disasters stresses the need to provide solutions for the challenges faced by disaster managers, such as the logistical deployment of the resources required to provide relief to victims. The location of emergency facilities, stock prepositioning, evacuation, inventory management, resource allocation, and relief distribution have been identified as directly impacting the relief provided to victims during a disaster. Managing these factors appropriately is critical to reducing suffering. Disaster management commonly attracts several organisations working alongside each other and sharing resources to cope with the emergency. Coordinating these agencies is a complex task, but there is little research considering multiple organisations, and none actually optimising the number of actors required to avoid shortages and convergence. The aim of this research is to develop a system for disaster management based on a combination of optimisation techniques and geographical information systems (GIS) to aid multi-organisational decision-making. An integrated decision system was created comprising a cartographic model implemented in GIS to discard floodable facilities, combined with two models focused on optimising the decisions regarding the location of emergency facilities, stock prepositioning, the allocation of resources and relief distribution, along with the number of actors required to perform these activities. Three in-depth case studies in Mexico were conducted, gathering information from different organisations. The cartographic model proved to reduce the risk of selecting unsuitable facilities. The preparedness and response models showed the capacity to optimise the decisions and the number of organisations required for logistical activities, pointing towards an excess of actors involved in all cases. The system as a whole demonstrated its capacity to provide integrated support for disaster preparedness and response, along with the existence of room for improvement for Mexican organisations in flood management.
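As an illustration only, here is a minimal sketch of the two-stage idea described above: discard floodable candidate facilities (the GIS screening) and then select facilities so that every community is covered. The data are invented, and a greedy rule stands in for the research's optimisation models.

```python
# Minimal, hypothetical sketch: first discard floodable candidate facilities
# (the GIS screening step), then pick facilities greedily so every community is
# covered. Data are invented; the actual research uses optimisation models.
candidates = {
    "A": {"floodable": False, "covers": {"town1", "town2"}},
    "B": {"floodable": True,  "covers": {"town2", "town3"}},
    "C": {"floodable": False, "covers": {"town3"}},
}
demand = {"town1", "town2", "town3"}

# Stage 1: cartographic screening removes facilities at flood risk.
safe = {name: c for name, c in candidates.items() if not c["floodable"]}

# Stage 2: greedy coverage as a stand-in for the location optimisation model.
selected, uncovered = [], set(demand)
while uncovered:
    best = max(safe, key=lambda n: len(safe[n]["covers"] & uncovered))
    if not safe[best]["covers"] & uncovered:
        break  # remaining demand cannot be covered by any safe facility
    selected.append(best)
    uncovered -= safe[best]["covers"]

print("selected facilities:", selected)
```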
Abstract:
While performance measurement in the sport industry has a traditionally strong focus on sports results, the commercial success of sports clubs needs to gain more ground. Sports results should generate market revenues by satisfying customer needs, and allow continued investment in the further improvement of sporting success (i.e. more funds for player transfers and wages). Club managers need to understand the complex relationship between on-field and off-the-field success, and identify critical success factors for achieving strategic objectives. The Balanced Scorecard approach provides a plausible framework for such analysis. Our paper explains the challenges of and opportunities for implementing a Balanced Scorecard system in non-profit organisations, and provides insights into its application in professional sport through an in-depth case study of a handball club in Hungary. We conclude by providing a model for managing sports organisations in line with strategic objectives, balancing out stakeholder expectations for both sports results and commercial success.
Abstract:
The tragic events of September 11th ushered in a new era of unprecedented challenges. Our nation has to be protected from the alarming threats of adversaries. These threats exploit the nation's critical infrastructures, affecting all sectors of the economy. There is a need for pervasive monitoring and decentralized control of the nation's critical infrastructures. The communication needs of monitoring and control of critical infrastructures were traditionally catered for by wired communication systems. These technologies ensured high reliability and bandwidth, but they are very expensive and inflexible and do not support mobility and pervasive monitoring. The communication protocols are Ethernet-based and use contention access protocols, which results in high rates of unsuccessful transmission and delay. An emerging class of wireless networks, named embedded wireless sensor and actuator networks, has potential benefits for real-time monitoring and control of critical infrastructures. The use of embedded wireless networks for monitoring and control of critical infrastructures requires secure, reliable and timely exchange of information among controllers, distributed sensors and actuators. The exchange of information is over shared wireless media. However, wireless media are highly unpredictable due to path loss, shadow fading and ambient noise. Monitoring and control applications have stringent requirements on reliability, delay and security. The primary issue addressed in this dissertation is the impact of wireless media in harsh industrial environments on the reliable and timely delivery of critical data. In the first part of the dissertation, a combined networking and information-theoretic approach was adopted to determine the transmit power required to maintain a minimum wireless channel capacity for reliable data transmission. The second part described a channel-aware scheduling scheme that ensured efficient utilization of the wireless link and provided delay guarantees. Various analytical evaluations and simulations are used to evaluate and validate the feasibility of the methodologies and demonstrate that the protocols achieve reliable and real-time data delivery in wireless industrial networks.
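As a hedged sketch of the capacity-driven transmit power idea in the first part of the dissertation, the Shannon capacity formula C = B·log2(1 + P·g/N) can be inverted for the power P needed to sustain a target capacity; the numbers and the simplified fading model below are assumptions, not the dissertation's analysis.

```python
# Minimal sketch: given a target channel capacity, a bandwidth, a noise power,
# and an estimated channel gain (path loss/shadowing), invert the Shannon
# capacity formula C = B * log2(1 + P * g / N) to obtain the required transmit
# power. All numbers are hypothetical and the fading model is simplified.
def required_transmit_power(capacity_bps, bandwidth_hz, noise_power_w, channel_gain):
    snr_required = 2.0 ** (capacity_bps / bandwidth_hz) - 1.0
    return snr_required * noise_power_w / channel_gain

p = required_transmit_power(
    capacity_bps=250e3,   # minimum capacity needed for the control traffic
    bandwidth_hz=2e6,     # channel bandwidth
    noise_power_w=1e-10,  # ambient noise power over the band
    channel_gain=1e-6,    # combined path loss and shadow fading estimate
)
print(f"required transmit power ≈ {p * 1e3:.3f} mW")
```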
Abstract:
The span of control is the single most discussed concept in classical and modern management theory. In specifying conditions for organizational effectiveness, the span of control has generally been regarded as a critical factor. Existing research has focused mainly on qualitative methods to analyze this concept, for example heuristic rules based on experience and/or intuition. This research takes a quantitative approach to the problem and formulates it as a binary integer model, which is used as a tool to study the organizational design issue. The model considers a range of requirements affecting the management and supervision of a given set of jobs in a company. Its decision variables include the allocation of jobs to workers, taking into account the complexity and compatibility of each job with respect to workers, and the management requirements for planning, execution, training, and control activities in a hierarchical organization. The objective of the model is to minimize operations cost, defined as the sum of supervision costs at each level of the hierarchy and the costs of workers assigned to jobs. The model is intended for application in make-to-order industries as a design tool. It could also be applied to make-to-stock companies as an evaluation tool, to assess the optimality of their current organizational structure. Extensive experiments were conducted to validate the model, to study its behavior, and to evaluate the impact of changing parameters with practical problems. This research proposes a meta-heuristic approach to solving large-size problems, based on the concept of greedy algorithms and the Meta-RaPS algorithm. The proposed heuristic was evaluated with two measures of performance: solution quality and computational speed. Quality is assessed by comparing the obtained objective function value to the one achieved by the optimal solution. Computational efficiency is assessed by comparing the computer time used by the proposed heuristic to the time taken by a commercial software system. Test results show that the proposed heuristic procedure generates good solutions in a time-efficient manner.
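As an illustrative sketch only, the Meta-RaPS construction idea can be shown on a toy job-to-worker assignment: at each step, take the cheapest feasible pair with some probability, otherwise choose randomly among pairs close to the best. The costs, parameters, and omission of supervision levels are all assumptions, not the paper's model.

```python
# Hypothetical Meta-RaPS-style construction sketch for assigning jobs to
# workers: with probability `priority` take the cheapest feasible (job, worker)
# pair, otherwise pick randomly among pairs within `restriction` of the best.
# Costs and compatibility are invented; supervision levels are ignored.
import random

costs = {  # (job, worker) -> assignment cost; missing pairs are incompatible
    ("j1", "w1"): 4, ("j1", "w2"): 6,
    ("j2", "w1"): 5, ("j2", "w2"): 3,
    ("j3", "w2"): 7,
}

def metaraps_assign(costs, priority=0.7, restriction=0.2, seed=0):
    rng = random.Random(seed)
    jobs = {j for j, _ in costs}
    assignment = {}
    while jobs:
        candidates = [(c, j, w) for (j, w), c in costs.items() if j in jobs]
        best = min(c for c, _, _ in candidates)
        if rng.random() < priority:
            pool = [x for x in candidates if x[0] == best]
        else:
            pool = [x for x in candidates if x[0] <= best * (1 + restriction)]
        c, j, w = rng.choice(pool)
        assignment[j] = (w, c)
        jobs.remove(j)
    return assignment

print(metaraps_assign(costs))
```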
Abstract:
This dissertation develops an image processing framework with unique feature extraction and similarity measurements for human face recognition in the thermal mid-wave infrared portion of the electromagnetic spectrum. The goal of this research is to design specialized algorithms that extract facial vasculature information, create a thermal facial signature and identify the individual. The objective is to use such findings in support of a biometrics system for human identification with a high degree of accuracy and a high degree of reliability. This last assertion is due to the minimal to non-existent risk of alteration of the intrinsic physiological characteristics seen through thermal infrared imaging. The proposed thermal facial signature recognition is fully integrated and consolidates the main and critical steps of feature extraction, registration, matching through similarity measures, and validation through testing the algorithm on a database, referred to as C-X1, provided by the Computer Vision Research Laboratory at the University of Notre Dame. Feature extraction was accomplished by first registering the infrared images to a reference image using the Functional MRI of the Brain (FMRIB) Linear Image Registration Tool (FLIRT), modified to suit thermal infrared images. This was followed by segmentation of the facial region using an advanced localized contouring algorithm applied to anisotropically diffused thermal images. Thermal feature extraction from facial images was attained by performing morphological operations such as opening and top-hat segmentation to yield thermal signatures for each subject. Four thermal images taken over a period of six months were used to generate thermal signatures and a thermal template for each subject; the thermal template contains only the most prevalent and consistent features. Finally, a similarity measure technique was used to match signatures to templates, and Principal Component Analysis (PCA) was used to validate the results of the matching process. Thirteen subjects were used to test the developed technique on an in-house thermal imaging system. Matching using a Euclidean-based similarity measure showed 88% accuracy for skeletonized signatures and templates, and 90% accuracy for anisotropically diffused signatures and templates. We also employed the Manhattan-based similarity measure and obtained an accuracy of 90.39% for skeletonized and diffused templates and signatures. An average improvement of 18.9% in the similarity measure was obtained when using diffused templates. The Euclidean- and Manhattan-based similarity measures were also applied to skeletonized signatures and templates of 25 subjects in the C-X1 database. The highly accurate results obtained in the matching process, along with the generalized design process, clearly demonstrate the ability of the thermal infrared system to be used with other thermal imaging based systems and related databases. A novel user-initialized registration of thermal facial images has been successfully implemented. Furthermore, the novel approach of developing a thermal signature template using four images taken at various times ensured that unforeseen changes in the vasculature did not affect the biometric matching process, as it relied on consistent thermal features.
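A minimal sketch of distance-based matching with Euclidean and Manhattan measures is given below; the feature vectors are random placeholders, whereas the dissertation's signatures are morphological vascular maps.

```python
# Minimal sketch of distance-based matching between a thermal signature and a
# set of stored templates, using Euclidean and Manhattan distances converted to
# similarity scores. Feature vectors are random placeholders only.
import numpy as np

rng = np.random.default_rng(0)
templates = {f"subject_{i}": rng.random(64) for i in range(3)}
probe = templates["subject_1"] + rng.normal(0, 0.05, 64)  # noisy copy of one subject

def euclidean_similarity(a, b):
    return 1.0 / (1.0 + np.linalg.norm(a - b))

def manhattan_similarity(a, b):
    return 1.0 / (1.0 + np.abs(a - b).sum())

best = max(templates, key=lambda s: euclidean_similarity(probe, templates[s]))
print("Euclidean match:", best)
best = max(templates, key=lambda s: manhattan_similarity(probe, templates[s]))
print("Manhattan match:", best)
```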
Abstract:
How might education professors disrupt traditional curriculum and teaching practices that teach future teachers to label, segregate, and marginalize students with disabilities? The Disability Studies in Education (DSE) approach grounds practice on the perspectives of people with disabilities and challenges practices that isolate and de-humanize individuals. The pedagogy for eliciting critical book reviews using a DSE perspective is described.
Abstract:
For the last three decades, the Capital Asset Pricing Model (CAPM) has been the dominant model for calculating expected return. In the early 1990s, Fama and French (1992) developed the Fama and French three-factor model by adding two additional factors to the CAPM. However, even with these models, estimates of the expected return have been found to be inaccurate (Elton, 1999; Fama & French, 1997). Botosan (1997) introduced a new approach to estimating the expected return. This approach employs an equity valuation model to calculate the internal rate of return (IRR), often called the 'implied cost of equity capital', as a proxy for the expected return. This approach has been gaining popularity among researchers. A critical review of the literature will help inform hospitality researchers regarding the issue and encourage them to implement the new approach in their own studies.
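As a generic, hedged sketch of the implied cost of equity idea: numerically solve for the discount rate that equates forecast payoffs (plus a terminal value) to the current share price. The forecasts below are invented, and this simple dividend-style setup stands in for the accounting-based valuation model that Botosan (1997) actually employs.

```python
# Generic sketch: find the discount rate that equates forecast cash flows to
# shareholders (plus a terminal value) with the current share price. Forecasts
# are invented; this is not Botosan's specific valuation model.
from scipy.optimize import brentq

price_today = 50.0
dividends = [2.0, 2.2, 2.4, 2.6, 2.8]  # forecast dividends per share, years 1-5
terminal_growth = 0.02

def pricing_error(r):
    pv = sum(d / (1 + r) ** t for t, d in enumerate(dividends, start=1))
    terminal = dividends[-1] * (1 + terminal_growth) / (r - terminal_growth)
    pv += terminal / (1 + r) ** len(dividends)
    return pv - price_today

# The implied cost of equity is the rate at which the pricing error vanishes.
implied_cost_of_equity = brentq(pricing_error, 0.03, 0.50)
print(f"implied cost of equity ≈ {implied_cost_of_equity:.2%}")
```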
Abstract:
As researchers and practitioners move towards a vision of software systems that configure, optimize, protect, and heal themselves, they must also consider the implications of such self-management activities on software reliability. Autonomic computing (AC) describes a new generation of software systems that are characterized by dynamically adaptive self-management features. During dynamic adaptation, autonomic systems modify their own structure and/or behavior in response to environmental changes. Adaptation can result in new system configurations and capabilities, which need to be validated at runtime to prevent costly system failures. However, although the pioneers of AC recognize that validating autonomic systems is critical to the success of the paradigm, the architectural blueprint for AC does not provide a workflow or supporting design models for runtime testing. This dissertation presents a novel approach for seamlessly integrating runtime testing into autonomic software. The approach introduces an implicit self-test feature into autonomic software by tailoring the existing self-management infrastructure to runtime testing. Autonomic self-testing facilitates activities such as test execution, code coverage analysis, timed test performance, and post-test evaluation. In addition, the approach is supported by automated testing tools and a detailed design methodology. A case study that incorporates self-testing into three autonomic applications is also presented. The findings of the study reveal that autonomic self-testing provides a flexible approach for building safe, reliable autonomic software, while limiting development and performance overhead through software reuse.
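A minimal, hypothetical sketch of the implicit self-test workflow is shown below: after a dynamic reconfiguration, run a small runtime test suite against the new configuration and roll back if any check fails. It illustrates the general idea only, not the dissertation's design methodology or tooling.

```python
# Hypothetical sketch: an autonomic component validates a dynamic adaptation by
# running runtime checks against the new configuration and rolling back on
# failure. Names and checks are placeholders, not the dissertation's design.
class AutonomicComponent:
    def __init__(self, config):
        self.config = config

    def reconfigure(self, new_config, runtime_tests):
        old_config = self.config
        self.config = new_config
        failures = [t.__name__ for t in runtime_tests if not t(self)]
        if failures:
            self.config = old_config  # roll back the unsafe adaptation
            raise RuntimeError(f"self-test failed: {failures}")
        return "adaptation validated"

# Example runtime checks (placeholders for generated test cases).
def responds_within_budget(component):
    return component.config.get("timeout_ms", 0) <= 200

def pool_size_is_positive(component):
    return component.config.get("pool_size", 0) > 0

c = AutonomicComponent({"timeout_ms": 100, "pool_size": 4})
print(c.reconfigure({"timeout_ms": 150, "pool_size": 8},
                    [responds_within_budget, pool_size_is_positive]))
```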
Abstract:
The purpose of this mixed methods study was to understand physics Learning Assistants' (LAs) views on reflective teaching, expertise in teaching, and the LA program teaching experience, and to determine whether those views predicted the level of reflection evident in their writing. Interviews were conducted in Phase One, Q methodology was used in Phase Two, and the level of reflection in participants' writing was assessed in Phase Three using a rubric based on Hatton and Smith's (1995) "Criteria for the Recognition of Evidence for Different Types of Reflective Writing". Interview analysis revealed varying perspectives on content knowledge, pedagogical knowledge, and experience in relation to expertise in teaching. Participants revealed that they engaged in reflection on their teaching, believed reflection helps teachers improve, and found peer reflection beneficial. Participants believed teaching experience in the LA program provided preparation for teaching, but that more preparation was needed to teach. Three typologies emerged in Phase Two. Type One LAs found participation in the LA program rewarding and believed expertise in teaching does not require expertise in content or pedagogy but develops over time from reflection. Type Two LAs valued reflection but not writing reflections; they felt the LA program teaching experience helped them decide on non-teaching careers and helped them confront gaps in their physics knowledge. Type Three LAs valued reflection, believed expertise in content and pedagogy is necessary for expert teaching, and felt the LA program teaching experience increased their likelihood of becoming teachers but did not prepare them for teaching. Writing assignments submitted in Phase Three were categorized as 19% descriptive writing, 60% descriptive reflections, and 21% dialogic reflections. No assignments were categorized as critical reflection. Using ordinal logistic regression, the typologies that emerged in Phase Two were not found to be predictors of the level of reflection evident in the writing assignments. In conclusion, the viewpoints of physics LAs were revealed, typologies among them were discovered, and their writing gave evidence of their ability to reflect on teaching. These findings may benefit faculty and staff in the LA program by helping them better understand the views of physics LAs and how to assess their various forms of reflection.
Abstract:
Long-span bridges are flexible and therefore sensitive to wind-induced effects. One way to improve the stability of long-span bridges against flutter is to use cross-sections that involve twin side-by-side decks. However, this can amplify responses due to vortex-induced oscillations. Wind tunnel testing is a well-established practice for evaluating the stability of bridges against wind loads. In order to study the response of the prototype in the laboratory, dynamic similarity requirements should be satisfied. One of the parameters that is normally violated in wind tunnel testing is the Reynolds number. In this dissertation, the effects of Reynolds number on the aerodynamics of a double-deck bridge were evaluated by measuring fluctuating forces on a motionless sectional model of a bridge at different wind speeds representing different Reynolds regimes. The efficacy of vortex mitigation devices was also evaluated across Reynolds number regimes. Another parameter that is frequently ignored in wind tunnel studies is the correct simulation of turbulence characteristics. Due to the difficulties in simulating flow with a large turbulence length scale on a sectional model, wind tunnel tests are often performed in smooth flow as a conservative approach. The validity of the simplifying assumptions in the calculation of buffeting loads, as the direct impact of turbulence, needs to be verified for twin-deck bridges. The effects of turbulence characteristics were investigated by testing sectional models of a twin-deck bridge under two different turbulent flow conditions. Not only do the flow properties play an important role in the aerodynamic response of the bridge, but the geometry of the cross-section is also expected to have significant effects. In this dissertation, the effects of deck details, such as the width of the gap between the twin decks and traffic barriers, on the aerodynamic characteristics of a twin-deck bridge were investigated, particularly on the vortex shedding forces, with the aim of clarifying how these shape details can alter the wind-induced responses. Finally, a summary of the issues involved in designing a dynamic test rig for high Reynolds number tests is given, using the studied cross-section as an example.
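As a small illustration of the Reynolds number similarity issue noted above: comparing Re = U·L/ν for a hypothetical full-scale deck and a 1:50 sectional model tested at the same speed in air shows the mismatch that motivates testing across Reynolds regimes.

```python
# Small sketch of the Reynolds number mismatch between prototype and model:
# Re = U * L / nu for a hypothetical full-scale deck and a 1:50 sectional model
# tested at the same wind speed in air. Numbers are illustrative only.
NU_AIR = 1.5e-5  # kinematic viscosity of air, m^2/s

def reynolds_number(wind_speed, characteristic_length, nu=NU_AIR):
    return wind_speed * characteristic_length / nu

full_scale = reynolds_number(wind_speed=30.0, characteristic_length=3.0)        # prototype deck depth
model_scale = reynolds_number(wind_speed=30.0, characteristic_length=3.0 / 50)  # 1:50 sectional model
print(f"prototype Re ≈ {full_scale:.2e}, model Re ≈ {model_scale:.2e}")
```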