906 results for Object-Oriented Programming
Abstract:
Corporate executives require relevant, intelligent business information in real time to make strategic decisions, and they require the freedom to access this information anywhere, at any time. There is a need to extend this functionality beyond the office and put it at the fingertips of decision makers. The Mobile Business Intelligence Tool (MBIT) aims to provide these features in a flexible and cost-efficient manner. This paper describes the detailed architecture of MBIT, which overcomes the limitations of existing mobile business intelligence tools. Further, a detailed implementation framework is presented to realize the design. This research highlights the benefits of using service-oriented architecture to design flexible, platform-independent mobile business applications. © 2009 IEEE.
Abstract:
Oriented, single-crystalline, one-dimensional (1D) TiO2 nanostructures would be most desirable for providing fascinating properties and features, such as high electron mobility or quantum confinement effects, high specific surface area, and even high mechanical strength, but achieving these structures has been limited by the availability of synthetic techniques. In this study, a concept for precisely controlling the morphology of 1D TiO2 nanostructures by tuning the hydrolysis rate of titanium precursors is proposed. Based on this innovation, oriented 1D rutile TiO2 nanostructure arrays with continually adjustable morphologies, from nanorods (NRODs) to nanoribbons (NRIBs) and then nanowires (NWs), as well as the transient-state morphologies, were successfully synthesized. The proposed method is a significant step in controlling the morphology of 1D TiO2 nano-architectures, which leads to significant changes in their band structures. It is worth noting that the synthesized rutile NRIBs and NWs have a bandgap and conduction band edge height comparable to those of the anatase phase, which in turn enhances their photochemical activity. In photovoltaic performance tests, the photoanode constructed from the oriented NRIB arrays possesses not only a higher surface area for dye loading and better light scattering in the visible range than the other morphologies, but also a wider bandgap and a higher conduction band edge, giving more than a 200% improvement in power conversion efficiency in dye-sensitized solar cells (DSCs) compared with the NROD morphology.
Abstract:
We present a new method for establishing correlation between deuterium and its attached carbon in a deuterated liquid crystal. The method is based on transfer of polarization using the DAPT pulse sequence, proposed originally for two spin-1/2 nuclei and now extended to a pair of a spin-1 and a spin-1/2 nucleus. DAPT utilizes the evolution of the magnetization of the spin pair under two blocks of phase-shifted BLEW-12 pulses on one of the spins, separated by a 90-degree pulse on the other spin. The method is easy to implement and, unlike Hartmann-Hahn cross-polarization, does not need to satisfy matching conditions. Experimental results presented demonstrate the efficacy of the method.
Abstract:
A nonlinear suboptimal guidance scheme is developed for the reentry phase of reusable launch vehicles. A recently developed methodology, named model predictive static programming (MPSP), is implemented, which combines the philosophies of nonlinear model predictive control theory and approximate dynamic programming. This technique provides a finite-time nonlinear suboptimal guidance law that leads to a rapid solution of the guidance history update. It does not suffer from computational difficulties and can be implemented online. The system dynamics are propagated through the flight corridor to the end of the reentry phase, with energy as the independent variable and angle of attack as the active control variable. All the terminal constraints are satisfied. Among the path constraints, the normal load is found to be the most restrictive. Hence, an extra effort has been made to keep the normal load within a specified limit and to monitor its sensitivity to perturbations.
Abstract:
We present a motion detection algorithm that detects the direction of motion at a sufficient number of points and thus segregates the edge image into clusters of coherently moving points. Unlike most algorithms for motion analysis, we do not estimate the magnitude of velocity vectors or obtain dense motion maps. The motivation is that motion direction information at a number of points appears sufficient to evoke the perception of motion and hence should be useful in many image processing tasks requiring motion analysis. The algorithm essentially updates the motion estimate from the previous time step using the current image frame as input, in a dynamic fashion. One of the novel features of the algorithm is the use of a feedback mechanism for evidence segregation. This kind of motion analysis can identify regions in the image that are moving together coherently, and such information may suffice for many applications that use motion, such as segmentation, compression, and tracking. We present an algorithm for tracking objects using our motion information to demonstrate the potential of this motion detection algorithm.
Abstract:
Stationary processes are random variables whose value is a signal and whose distribution is invariant to translation in the domain of the signal. They are intimately connected to convolution, and therefore to the Fourier transform, since the covariance matrix of a stationary process is a Toeplitz matrix, and Toeplitz matrices are the expression of convolution as a linear operator. This thesis utilises this connection in the study of i) efficient training algorithms for object detection and ii) trajectory-based non-rigid structure-from-motion.
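The Toeplitz-convolution connection the abstract relies on can be shown directly. The following is a minimal pure-Python sketch (the kernel and signal values are illustrative, not from the thesis): a Toeplitz matrix with entries T[i][j] = h[i-j] applied to a signal reproduces the truncated convolution of that signal with h.

```python
# Sketch: a Toeplitz matrix expresses convolution as a linear operator.
# Kernel h and signal x below are illustrative values only.

def toeplitz_matrix(h, n):
    """n x n Toeplitz matrix T with T[i][j] = h[i - j] (zero outside h's support)."""
    def tap(k):
        return h[k] if 0 <= k < len(h) else 0.0
    return [[tap(i - j) for j in range(n)] for i in range(n)]

def matvec(T, x):
    """Ordinary matrix-vector product."""
    return [sum(T[i][j] * x[j] for j in range(len(x))) for i in range(len(T))]

def causal_convolve(h, x):
    """y[i] = sum_k h[k] * x[i - k], truncated to the support of x."""
    return [sum(h[k] * x[i - k] for k in range(len(h)) if 0 <= i - k < len(x))
            for i in range(len(x))]

h = [1.0, -2.0, 0.5]
x = [3.0, 1.0, 4.0, 1.0, 5.0]
T = toeplitz_matrix(h, len(x))
assert matvec(T, x) == causal_convolve(h, x)
```

Because the covariance matrix of a stationary process has exactly this structure, operations on it can be diagonalised (approximately) by the Fourier transform, which is the efficiency the thesis exploits.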
Abstract:
This paper investigates the use of Genetic Programming (GP) to create an approximate model of the non-linear relationship between the flexural stiffness, length, mass per unit length, and rotation speed of rotating beams and their natural frequencies. GP, a relatively new form of artificial intelligence, is derived from the Darwinian concepts of evolution and genetics, and it creates computer programs to solve problems by manipulating their tree structures. GP determines the size and structural complexity of the empirical model by minimizing the mean square error at specified points of an input-output dataset. This dataset is generated using a finite element model. The validity of the GP-generated model is tested by comparing the natural frequencies at the training points and at additional input data points. It is found that by using a non-dimensional stiffness, a simple and accurate function approximation for the natural frequency can be obtained. This function approximation model is then used to study the relationships between natural frequency and various influencing parameters for uniform and tapered beams. The relations obtained with the GP model agree well with FEM results and can be used for preliminary design and structural optimization studies.
Abstract:
Visual tracking has been a challenging problem in computer vision for decades. Its applications are far-reaching, ranging from surveillance and monitoring to smart rooms. The mean-shift (MS) tracker, which has gained attention recently, is known for tracking objects in cluttered environments with low computational complexity. The major problem encountered in histogram-based MS is its inability to track rapidly moving objects. In order to track fast-moving objects, we propose a new robust mean-shift tracker that uses both a spatial similarity measure and a color histogram-based similarity measure. The inability of the MS tracker to handle large displacements is circumvented by the spatial similarity-based tracking module, which by itself lacks robustness to changes in the object's appearance; the histogram-based module compensates for this. The proposed tracker outperforms either individual tracker, tracking fast-moving objects with better accuracy.
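The abstract does not specify its histogram similarity measure, but a common choice in mean-shift tracking is the Bhattacharyya coefficient between the normalised target and candidate histograms. A minimal sketch, with hypothetical 5-bin colour histograms as data:

```python
# Sketch of a colour-histogram similarity measure often used in mean-shift
# tracking: the Bhattacharyya coefficient between normalised histograms.
# The histogram values below are hypothetical, not from the paper.
import math

def normalise(hist):
    """Scale a histogram so its bins sum to 1 (a discrete distribution)."""
    total = float(sum(hist))
    return [h / total for h in hist]

def bhattacharyya(p, q):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

target    = normalise([4, 10, 6, 0, 2])   # histogram of the tracked object
candidate = normalise([5,  9, 5, 1, 2])   # histogram at a candidate location
sim = bhattacharyya(target, candidate)
assert 0.0 < sim <= 1.0
```

The mean-shift iteration then moves the candidate window in the direction that increases this similarity; the paper's contribution is to pair such a measure with a spatial one for large displacements.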
Abstract:
We compared student performance on large-scale take-home assignments and small-scale invigilated tests that require competency with exactly the same programming concepts. The purpose of the tests, which were carried out soon after the take-home assignments were submitted, was to validate the students' assignments as individual work. We found widespread discrepancies between the marks students achieved on the two types of tasks: many students achieved a much higher grade on the take-home assignments than on the invigilated tests. We conclude that these paired assessments are an effective way to quickly identify students who are still struggling with programming concepts that we might otherwise assume they understand, given their ability to complete similar, yet more complicated, tasks in their own time. We classify these students as not yet being at the neo-Piagetian stage of concrete operational reasoning.
Abstract:
Folded Dynamic Programming (FDP) is adopted for developing optimal reservoir operation policies for flood control. It is applied to a case study of the Hirakud Reservoir in the Mahanadi basin, India, with the objective of deriving an optimal policy for flood control. The river flows down to Naraj, the head of the delta, where a major city is located, and finally joins the Bay of Bengal. As the Hirakud reservoir lies upstream of the delta area in the basin, it plays an important role in alleviating the severity of floods in this area. Data from 68 floods, including the peaks of the inflow hydrographs, the peak outflow from the reservoir during each flood, the peak of the flow hydrograph at Naraj, and the downstream (d/s) catchment contribution, are utilized. The combinations of 51, 54, and 57 thousand cumecs as peak inflow into the reservoir, with 25.5, 20, and 14 thousand cumecs respectively as peak d/s catchment contribution, form the critical combinations for the flood situation. It is observed that the combination of 57 thousand cumecs of inflow into the reservoir and 14 thousand cumecs of d/s catchment contribution is the most critical among the critical combinations of the flow series. The method proposed can be extended to similar situations for deriving reservoir operating policies for flood control.
Abstract:
Bottlenecking operations in a complex coal rail system cost mining companies millions of dollars. To address this issue, this paper investigates a real-world coal rail system and aims to optimise the coal railing operations under constraints of limited resources (e.g., a limited number of locomotives and wagons). In the literature, most studies considered the train scheduling problem on a single-track railway network to be strongly NP-hard and thus developed metaheuristics as the main solution methods. In this paper, a new mathematical programming model is formulated and implemented in an optimization programming language based on a constraint programming (CP) approach. A new depth-first-search technique is developed and embedded inside the CP model to obtain the optimised coal railing timetable efficiently. Computational experiments demonstrate that high-quality solutions are obtainable in industry-scale applications. To support decision making, a sensitivity analysis is conducted for different scenarios and specific criteria.
Keywords: Train scheduling · Rail transportation · Coal mining · Constraint programming
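To give a flavour of the depth-first-search idea on a single shared track, here is a deliberately tiny sketch, far simpler than the paper's CP model and with hypothetical train names and data: DFS branches on which train occupies the track next, pruning any partial order that already cannot beat the best makespan found so far.

```python
# Toy sketch (hypothetical data, not the paper's model): depth-first search over
# the order in which trains occupy a single shared track, minimising the time
# the last train clears it. Each train has a release time and a duration.

def dfs_schedule(trains):
    """trains: list of (name, release, duration). Returns (makespan, order)."""
    best = [float("inf"), None]

    def go(remaining, track_free, order):
        if not remaining:
            if track_free < best[0]:
                best[0], best[1] = track_free, list(order)
            return
        if track_free >= best[0]:      # prune: cannot improve on the incumbent
            return
        for i, (name, release, dur) in enumerate(remaining):
            start = max(track_free, release)   # wait for track and for release
            go(remaining[:i] + remaining[i + 1:], start + dur, order + [name])

    go(trains, 0, [])
    return best[0], best[1]

trains = [("coal-1", 0, 4), ("coal-2", 1, 2), ("empty-1", 3, 3)]
makespan, order = dfs_schedule(trains)
```

A real timetable adds track segments, crossing loops, and rolling-stock limits as constraints; embedding such a search inside a CP solver, as the paper does, lets constraint propagation do the pruning.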
Abstract:
Separated local field (SLF) spectroscopy is a powerful technique for measuring heteronuclear dipolar couplings. The method provides site-specific dipolar couplings for oriented samples, such as membrane proteins oriented in lipid bilayers and liquid crystals. A majority of SLF techniques utilize the well-known Polarization Inversion Spin Exchange at Magic Angle (PISEMA) pulse scheme, which employs spin exchange at the magic angle under a Hartmann-Hahn match. Though PISEMA provides a relatively large scaling factor for the heteronuclear dipolar coupling and better resolution along the dipolar dimension, it has a few shortcomings. One of the major problems with PISEMA is that the sequence is highly sensitive to the proton carrier offset, and the measured dipolar coupling changes dramatically with the carrier frequency. The study presented here focuses on modified PISEMA sequences which are relatively insensitive to proton offsets over a large range. In the proposed sequences, the proton magnetization is cycled through two quadrants while the effective field is cycled through either two or four quadrants. The modified sequences are named 2(n)-SEMA, where n represents the number of quadrants the effective field is cycled through. Experiments carried out on a liquid crystal and on a single crystal of a model peptide demonstrate the usefulness of the modified sequences. A systematic study under various offset and Hartmann-Hahn mismatch conditions has been carried out, and the performance is compared with PISEMA under similar conditions.
Abstract:
On the one hand, this thesis attempts to develop and empirically test an ethically defensible theorization of the relationship between human resource management (HRM) and competitive advantage. The specific empirical evidence indicates that at least part of HRM's causal influence on employee performance may operate indirectly, through a social architecture and then through psychological empowerment. However, the evidence concerning a potential influence of HRM on organizational performance in particular seems to put in question some of the rhetoric within the HRM research community. On the other hand, the thesis tries to explicate and defend a certain attitude towards the philosophically oriented debates within organization science. This involves suggestions as to how we should understand meaning, reference, truth, justification, and knowledge. On this understanding, it is not fruitful to see either the problems of empirical social science, or their solutions, as fundamentally philosophical ones. It is argued that the notorious problems of social science, exemplified in this thesis by research on HRM, can be seen as related to dynamic complexity in combination with both the ethical and the pragmatic difficulty of "laboratory-like experiments". Solutions … can only be sought by informed trial and error, depending on the perceived familiarity with the object(s) of research. The odds are against anybody who hopes for clearly adequate social scientific answers to more complex questions. Social science is particularly unlikely to arrive at largely accepted knowledge of the kind "if we do this, then that will happen", or even "if we do this, then that is likely to happen". One of the problems probably facing most social scientific research communities is to specify and agree upon the "this" and the "that" and to provide convincing evidence of how they are (causally) related.
On most more complex questions, the role of social science seems largely to remain that of contributing to a (critical) conversation, rather than arriving at more generally accepted knowledge. This is ultimately what is both argued and, in a sense, demonstrated, using research on the relationship between HRM and organizational performance as an example.
Abstract:
Non-uniform sampling of a signal is formulated as an optimization problem which minimizes the reconstruction error. Dynamic programming (DP) has been used to solve this problem efficiently for a finite-duration signal. Further, the optimum samples are quantized to realize a speech coder. The quantizer and the DP-based optimum search for non-uniform samples (DP-NUS) can be combined in a closed-loop manner, which provides a distinct advantage over the open-loop formulation. The DP-NUS formulation provides useful control over the trade-off between bitrate and performance (reconstruction error). It is shown that a 5-10 dB SNR improvement is possible using DP-NUS compared to the extrema-sampling approach. In addition, the closed-loop DP-NUS gives a 4-5 dB improvement in reconstruction error.
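The DP formulation can be illustrated with a minimal sketch. This is my own reconstruction of the idea, not the paper's coder: pick k sample positions of a finite signal (endpoints always kept) so that the error of piecewise-linear reconstruction from those samples is minimised, with a DP state of (last kept index, number of samples used).

```python
# Hedged sketch of DP-based non-uniform sample selection (illustrative only):
# choose k sample indices minimising the squared error of piecewise-linear
# reconstruction. The toy signal below is hypothetical.

def seg_error(x, i, j):
    """Squared error of linearly interpolating x between kept samples i and j."""
    err = 0.0
    for t in range(i + 1, j):
        interp = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
        err += (x[t] - interp) ** 2
    return err

def dp_nus(x, k):
    """Pick k sample indices of x (first and last forced) minimising error."""
    n = len(x)
    INF = float("inf")
    # cost[j][m]: min error covering x[0..j] with m samples, the last at j
    cost = [[INF] * (k + 1) for _ in range(n)]
    back = [[None] * (k + 1) for _ in range(n)]
    cost[0][1] = 0.0                      # the first sample is always index 0
    for j in range(1, n):
        for m in range(2, k + 1):
            for i in range(j):
                if cost[i][m - 1] == INF:
                    continue
                c = cost[i][m - 1] + seg_error(x, i, j)
                if c < cost[j][m]:
                    cost[j][m], back[j][m] = c, i
    # backtrack from the final sample at index n-1
    idx, j, m = [], n - 1, k
    while j is not None:
        idx.append(j)
        j, m = back[j][m], m - 1
    return cost[n - 1][k], sorted(idx)

x = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]   # piecewise-linear toy signal
err, samples = dp_nus(x, 3)
```

On this toy signal the DP keeps the two endpoints and the peak, giving exact reconstruction; the paper's closed-loop variant additionally feeds quantization error back into this search.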