985 results for self-deployment algorithms
Abstract:
The paper explores the functionalities of eight start pages and considers their usefulness as a mashable platform for the deployment of personal learning environments (PLEs) for self-organized learners. The Web 2.0 effects and eLearning 2.0 strategies are examined from the point of view of how they influence the methods of gathering and capturing data, information and knowledge, and the learning process. Mashup technology is studied in order to see what kinds of components can be used in PLE realization. A model of a PLE for self-organized learners is developed and used to prototype a personal learning and research environment in the start pages Netvibes, Pageflakes and iGoogle.
Abstract:
Smart cameras perform on-board image analysis, adapt their algorithms to changes in their environment, and collaborate with other networked cameras to analyze the dynamic behavior of objects. A proposed computational framework adopts the concepts of self-awareness and self-expression to more efficiently manage the complex tradeoffs among performance, flexibility, resources, and reliability. The Web extra at http://youtu.be/NKe31-OKLz4 is a video demonstrating CamSim, a smart-camera simulation tool that enables users to test self-adaptive and self-organizing smart-camera techniques without deploying a smart-camera network.
Abstract:
This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computer resources to or from the engine. Our contribution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in light of changes in the query workload, the ideal number of processors p to keep active at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined which groups of search data will be distributed across the processors. ROP is the problem of redistributing this data onto processors while keeping the engine responsive and minimising the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, the incremental NGP algorithm speeds up the index computation 2 to 10 times while maintaining similar search performance.
The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that reliably determines the new size of the search engine. Combined, these algorithms form an adaptive algorithm that adjusts the search engine size under a variable workload.
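The CNP sub-problem, choosing how many processors to keep active from the observed load, can be sketched as a simple deterministic threshold controller. This is an illustrative sketch only; the function name and thresholds are assumptions, not taken from the thesis.

```python
def choose_processor_count(current_p, utilization, min_p=1, max_p=64,
                           scale_up=0.75, scale_down=0.30):
    """Return a new processor count based on observed utilization.

    A deterministic rule of the kind CNP calls for: grow when the
    engine is busy, shrink when it is idle. Thresholds are
    illustrative, not those of the thesis.
    """
    if utilization > scale_up and current_p < max_p:
        return min(max_p, current_p * 2)       # double capacity under heavy load
    if utilization < scale_down and current_p > min_p:
        return max(min_p, current_p // 2)      # halve capacity when idle
    return current_p                           # load is in the comfortable band
```

In a full system, each change of the returned count would trigger the NGP (regroup the index) and ROP (redistribute data) steps described above.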
Abstract:
Numerical optimization is a technique where a computer is used to explore design parameter combinations to find extremes in performance factors. In multi-objective optimization several performance factors can be optimized simultaneously. The solution to multi-objective optimization problems is not a single design, but a family of optimized designs referred to as the Pareto frontier. The Pareto frontier is a trade-off curve in the objective function space composed of solutions where performance in one objective function is traded for performance in others. A Multi-Objective Hybridized Optimizer (MOHO) was created for the purpose of solving multi-objective optimization problems by utilizing a set of constituent optimization algorithms. MOHO tracks the progress of the Pareto frontier approximation development and automatically switches amongst those constituent evolutionary optimization algorithms to speed the formation of an accurate Pareto frontier approximation. Aerodynamic shape optimization is one of the oldest applications of numerical optimization. MOHO was used to perform shape optimization on a 0.5-inch ballistic penetrator traveling at Mach number 2.5. Two objectives were simultaneously optimized: minimize aerodynamic drag and maximize penetrator volume. This problem was solved twice. The first time the problem was solved by using Modified Newton Impact Theory (MNIT) to determine the pressure drag on the penetrator. In the second solution, a Parabolized Navier-Stokes (PNS) solver that includes viscosity was used to evaluate the drag on the penetrator. The studies show the difference in the optimized penetrator shapes when viscosity is absent and present in the optimization. In modern optimization problems, objective function evaluations may require many hours on a computer cluster to perform these types of analysis. One solution is to create a response surface that models the behavior of the objective function. 
Once enough data about the behavior of the objective function has been collected, a response surface can be used to represent the actual objective function in the optimization process. The Hybrid Self-Organizing Response Surface Method (HYBSORSM) algorithm was developed and used to build response surfaces of objective functions. HYBSORSM was evaluated using a suite of 295 non-linear functions involving 2 to 100 variables, demonstrating its robustness and accuracy.
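The Pareto frontier described above is the set of non-dominated designs: no other design is at least as good in every objective and strictly better in one. A minimal sketch of extracting it (for minimization; the function name is illustrative, not from MOHO):

```python
def pareto_front(points):
    """Return the non-dominated subset of `points` (minimization).

    A point dominates another if it is no worse in every objective
    and strictly better in at least one; the survivors form the
    Pareto frontier. Didactic sketch, not the MOHO implementation.
    """
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For example, with objectives (drag, negative volume), `pareto_front` keeps only the designs where reducing one objective necessarily worsens the other.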
Abstract:
The underrepresentation of women in physics has been well documented and is a source of concern for both policy makers and educators. My dissertation focuses on understanding the role self-efficacy plays in retaining students, particularly women, in introductory physics. I use an explanatory mixed methods approach to first investigate quantitatively the influence of self-efficacy in predicting success and then to qualitatively explore the development of self-efficacy. In the initial quantitative studies, I explore the utility of self-efficacy in predicting the success of introductory physics students, both women and men. Results indicate that self-efficacy is a significant predictor of success for all students. I then disaggregate the data to examine how self-efficacy develops differently for women and men in the introductory physics course. Results show women rely on different sources of self-efficacy than do men, and that a particular instructional environment, Modeling Instruction, has a positive impact on these sources of self-efficacy. In the qualitative phase of the project, this dissertation focuses on the development of self-efficacy. Using the qualitative tool of microanalysis, I introduce a methodology for understanding how self-efficacy develops moment-by-moment using the lens of self-efficacy opportunities. I then use the characterizations of self-efficacy opportunities to focus on a particular course environment and to identify and describe a mechanism by which Modeling Instruction impacts student self-efficacy. Results indicate that emphasizing the development and deployment of models affords opportunities to impact self-efficacy. The findings of this dissertation indicate that introducing key elements into the classroom, such as cooperative group work, model development and deployment, and interaction with the instructor, creates a mechanism by which instructors can impact the self-efficacy of their students.
Results from this study indicate that creating a model to impact the retention rates of women in physics should include attending to self-efficacy and designing activities in the classroom that create self-efficacy opportunities.
Abstract:
Ten-month time series of mean volume backscattering strength (MVBS) and vertical velocity obtained from three moored acoustic Doppler current profilers (ADCPs) deployed from February until December 2005 at 64°S, 66.5°S and 69°S along the Greenwich Meridian were used to analyse the diel vertical zooplankton migration (DVM) and its seasonality and regional variability in the Lazarev Sea. The estimated MVBS exhibited distinct patterns of DVM at all three mooring sites. Between February and October, the timing of the DVM and the residence time of zooplankton at depth were clearly governed by the day-night rhythm. Mean daily cycles of the ADCP-derived vertical velocity were calculated for successive months and showed maximum ascent and descent velocities of 16 and -15 mm/s. However, a change of the MVBS pattern occurred in late spring/early austral summer (October/November), when the zooplankton communities ceased their synchronous vertical migration at all three mooring sites. Elevated MVBS values were then concentrated in the uppermost layers (<50 m) at 66.5°S. This period coincided with the decay of sea ice coverage at 64°S and 66.5°S between early November and mid-December. Elevated chlorophyll concentrations, which were measured at the end of the deployment, extended from 67°S to 65°S and indicated a phytoplankton bloom in the upper 50 m. Thus, we propose that the increased food supply associated with an ice edge bloom caused the zooplankton communities to cease their DVM in favour of feeding.
Abstract:
Background: Depression is the largest contributing factor to years lost to disability, and symptom remission does not always result in functional improvement. Comprehensive analysis of functioning requires investigation both of the competence to perform behaviours and of actual performance in the real world. Further, two independent domains of functioning have been proposed: adaptive (behaviours conducive to daily living skills and independent functioning) and interpersonal (behaviours conducive to the successful initiation and maintenance of social relationships). To date, very little is known about the relationship between these constructs in depression, and about the factors that may play a key role in the disparity between competence and real-world performance in adaptive and interpersonal functioning. Purpose: This study used a multidimensional (adaptive and interpersonal functioning), multi-level (competence and performance) approach to explore the potential discrepancy between competence and real-world performance in depression, specifically investigating whether self-efficacy (one's beliefs about one's capability to perform particular actions) predicts depressed individuals' underperformance in the real world relative to their ability. A comparison sample of healthy participants was included to investigate the level of depressed individuals' impairment, across variables, relative to healthy individuals. Method: Forty-two participants with depression and twenty healthy participants without history of, or current, psychiatric illness were recruited in the Kingston, Ontario community. Competence, self-efficacy, and real-world functioning (each in both adaptive and interpersonal domains), as well as symptoms, were assessed during a single-visit assessment.
Results: Relative to healthy individuals, depressed individuals showed significantly poorer adaptive and interpersonal competence, poorer adaptive and interpersonal functioning, and significantly lower self-efficacy for adaptive and interpersonal behaviours. Self-efficacy significantly predicted functional disability in both the adaptive and interpersonal domains of functioning. Interpersonal self-efficacy accounted for significant variance in the discrepancy between interpersonal competence and functioning. Conclusions: The current study provides the first data regarding relationships among competence, functioning, and self-efficacy in depression. Self-efficacy may play an important role in the deployment of functional skills in everyday life. This has implications for therapeutic interventions aimed at enhancing depressed individuals' engagement in functional activities. There may be additional intrinsic or extrinsic factors that influence the relationships among competence and functioning in depression.
Abstract:
Mobile Network Optimization (MNO) technologies have advanced at a tremendous pace in recent years. The Dynamic Network Optimization (DNO) concept emerged years ago, aiming to continuously optimize the network in response to variations in network traffic and conditions. Yet DNO development is still in its infancy, hindered mainly by the bottleneck of lengthy optimization runtimes. This paper identifies parallelism in greedy MNO algorithms and presents an advanced distributed parallel solution. The solution is designed, implemented and applied to real-life projects, yielding a significant, highly scalable and nearly linear speedup of up to 6.9 and 14.5 on distributed 8-core and 16-core systems respectively. Meanwhile, the optimization outputs exhibit self-consistency and high precision compared to their sequential counterpart. This is a milestone in realizing DNO. Further, the techniques may be applied to similar applications based on greedy optimization algorithms.
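The parallelism identified above exists wherever the greedy step for one network region is independent of the others. A minimal sketch under that assumption (region structure, function names, and the thread-based executor are all illustrative, not the paper's distributed implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def greedy_assign(cells):
    """Greedy per-region step: pick the highest-scoring option per cell.

    Stand-in for one independent slice of a greedy MNO algorithm,
    e.g. choosing an antenna parameter per cell by a local score.
    """
    return {cell: max(options, key=options.get)
            for cell, options in cells.items()}

def optimize_parallel(regions, workers=4):
    """Run the greedy step for each independent region concurrently.

    Illustrates the kind of parallelism the paper identifies; the
    real solution distributes work across 8- and 16-core systems.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(greedy_assign, regions))
```

Because each region is optimized in isolation, the parallel result matches the sequential one, which mirrors the self-consistency the paper reports.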
Abstract:
The dendritic cell algorithm (DCA) is an immune-inspired algorithm, developed for the purpose of anomaly detection. The algorithm performs multi-sensor data fusion and correlation which results in a ‘context aware’ detection system. Previous applications of the DCA have included the detection of potentially malicious port scanning activity, where it has produced high rates of true positives and low rates of false positives. In this work we aim to compare the performance of the DCA and of a self-organizing map (SOM) when applied to the detection of SYN port scans, through experimental analysis. A SOM is an ideal candidate for comparison as it shares similarities with the DCA in terms of the data fusion method employed. It is shown that the results of the two systems are comparable, and both produce false positives for the same processes. This shows that the DCA can produce anomaly detection results to the same standard as an established technique.
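A SOM of the kind compared above learns by pulling each input's best-matching unit, and its lattice neighbours, towards the input. A didactic one-dimensional sketch, not the experimental setup of the paper (unit count, rates, and function names are assumptions):

```python
import random

def train_som(data, n_units=4, dim=2, epochs=50, lr=0.5, seed=0):
    """Train a tiny one-dimensional self-organizing map.

    Each input vector pulls its best-matching unit (and, with half
    strength, that unit's lattice neighbours) towards itself. A
    didactic sketch of the competitive data-fusion idea only.
    """
    rng = random.Random(seed)
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)       # decaying learning rate
        for x in data:
            bmu = bmu_index(units, x)          # winner of the competition
            for i in (bmu - 1, bmu, bmu + 1):  # winner plus lattice neighbours
                if 0 <= i < n_units:
                    strength = rate if i == bmu else rate / 2
                    units[i] = [u + strength * (v - u)
                                for u, v in zip(units[i], x)]
    return units

def bmu_index(units, x):
    """Index of the unit closest to x (usable for anomaly scoring)."""
    return min(range(len(units)),
               key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
```

After training, inputs far from every unit map poorly onto the lattice, which is the basis for using a SOM as an anomaly detector.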
Abstract:
Power grids are critical infrastructures on which everything else relies, and their correct behavior is of the highest priority. New smart devices are being deployed to manage and control power grids more efficiently and avoid instability. However, the deployment of smart devices such as Phasor Measurement Units (PMUs) and Phasor Data Concentrators (PDCs) opens new opportunities for cyber attackers to exploit network vulnerabilities. If a PDC is compromised, all data coming from PMUs to that PDC is lost, reducing network observability. Our approach to this problem is to develop an Intrusion Detection System (IDS) in a Software-Defined Network (SDN), allowing the IDS to detect compromised devices and feed that information to a self-healing SDN controller, which redirects the data of the affected PMUs to a new, uncompromised PDC, maintaining the maximum possible network observability at every moment. During this research, we successfully implemented self-healing in an example network with an SDN controller based on the Ryu controller. We also assessed intrinsic vulnerabilities of Wide Area Measurement Systems (WAMS) and SCADA networks, integrated the Ryu controller with Snort, and created Snort rules specific to SCADA and WAMS systems and protocols to protect the vulnerabilities of these networks. The integration of the IDS and the SDN controller was also successful.
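The self-healing step above amounts to reassigning PMU streams away from a compromised PDC. A minimal sketch of that reassignment logic (names and the least-loaded rule are illustrative assumptions, not taken from the Ryu/Snort implementation):

```python
def reassign_pmus(assignment, healthy_pdcs):
    """Redirect PMU data streams away from compromised PDCs.

    `assignment` maps each PMU to its current PDC; any PMU whose PDC
    is not in `healthy_pdcs` is moved to the least-loaded healthy
    PDC, mimicking the controller's goal of preserving observability.
    Illustrative sketch of the self-healing decision only.
    """
    if not healthy_pdcs:
        raise ValueError("no healthy PDC left to receive PMU data")
    load = {pdc: 0 for pdc in healthy_pdcs}
    for pdc in assignment.values():
        if pdc in load:
            load[pdc] += 1
    new_assignment = {}
    for pmu, pdc in assignment.items():
        if pdc in healthy_pdcs:
            new_assignment[pmu] = pdc          # stream is unaffected
        else:
            target = min(load, key=load.get)   # least-loaded healthy PDC
            new_assignment[pmu] = target
            load[target] += 1
    return new_assignment
```

In the actual system this decision would be translated into SDN flow rules pushed by the controller once the IDS flags a PDC.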
Abstract:
Master's dissertation, Informatics Engineering, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015
Abstract:
The paper presents an investigation of fixed-referenced and self-referenced wave energy converters and a comparison of their corresponding wave energy conversion capacities in real seas. For the comparisons, two popular wave energy converters, the point absorber and the oscillating water column, and their power conversion capacities in fixed-referenced and self-referenced forms have been numerically studied and compared. In the numerical models, the devices' power extraction from the seas is maximized using correspondingly optimized power take-offs in different sea states, so that their power conversion capacities can be calculated and compared. The comparisons and analyses show that the energy conversion capacity of a self-referenced device can be significantly increased if the motions of the device itself can be utilized for wave energy conversion, and that self-referenced devices can possibly be designed to be compliant in long waves, which could be a very beneficial factor for device survivability in extreme wave conditions (normally long waves). In this regard, self-referenced WECs (wave energy converters) may be the better option in terms of wave energy conversion from the targeted waves in seas (frequently the most common), and in terms of device survivability, especially in extreme waves, when compared to their fixed-referenced counterparts.
Abstract:
A densely built environment is a complex system of infrastructure, nature, and people closely interconnected and interacting. Vehicles, public transport, weather action, and sports activities constitute a manifold set of excitation and degradation sources for civil structures. In this context, operators should consider different factors in a holistic approach for assessing the structural health state. Vibration-based structural health monitoring (SHM) has demonstrated great potential as a decision-supporting tool to schedule maintenance interventions. However, most excitation sources are considered an issue for practical SHM applications since traditional methods are typically based on strict assumptions on input stationarity. Last-generation low-cost sensors present limitations related to a modest sensitivity and high noise floor compared to traditional instrumentation. If these devices are used for SHM in urban scenarios, short vibration recordings collected during high-intensity events and vehicle passage may be the only available datasets with a sufficient signal-to-noise ratio. While researchers have spent efforts to mitigate the effects of short-term phenomena in vibration-based SHM, the ultimate goal of this thesis is to exploit them and obtain valuable information on the structural health state. First, this thesis proposes strategies and algorithms for smart sensors operating individually or in a distributed computing framework to identify damage-sensitive features based on instantaneous modal parameters and influence lines. Ordinary traffic and people activities become essential sources of excitation, while human-powered vehicles, instrumented with smartphones, take the role of roving sensors in crowdsourced monitoring strategies. The technical and computational apparatus is optimized using in-memory computing technologies. Moreover, identifying additional local features can be particularly useful to support the damage assessment of complex structures. 
Thereby, smart coatings are studied to enable the self-sensing properties of ordinary structural elements. In this context, a machine-learning-aided tomography method is proposed to interpret the data provided by a nanocomposite paint interrogated electrically.
Abstract:
The main objective of my thesis work is to exploit Kubeflow, Google's native, open-source platform (specifically Kubeflow Pipelines), to execute a scalable Federated Learning (FL) ML process in a simplified, 5G-like test architecture hosting a Kubernetes cluster, and to apply the widely adopted FedAVG algorithm and its optimization FedProx, empowered by the ML platform's abilities to ease the development and production cycle of this specific FL process. FL algorithms are more and more promising and are adopted both in cloud application development and in 5G communication enhancement: data coming from monitoring the underlying telco infrastructure is used for training and aggregation at edge nodes to optimize the algorithm's global model (which could be used, for example, for resource provisioning to reach an agreed QoS for the underlying network slice). After a study of the available papers and scientific articles related to FL, carried out with the help of the CTTC, which suggested studying and using Kubeflow to run the algorithm, we found that this approach to the whole FL deployment cycle was not documented and might be interesting to investigate in more depth. This study may help prove the efficiency of the Kubeflow platform for developing new FL algorithms that will support new applications, and especially test the performance of the FedAVG algorithm in a simulated client-to-cloud communication using the MNIST dataset as an FL benchmark.
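The aggregation step of the FedAVG algorithm named above averages client model weights in proportion to each client's local dataset size. A minimal sketch of that step (flat weight vectors for simplicity; the thesis runs this inside Kubeflow pipelines, which is not shown here):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: size-weighted average of client weights.

    `client_weights` is a list of flat weight vectors, one per
    client; `client_sizes` gives each client's local sample count.
    Returns the new global model. Minimal sketch of the aggregation
    step only, not the full training loop.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for weights, size in zip(client_weights, client_sizes):
        for j in range(dim):
            global_w[j] += weights[j] * size / total
    return global_w
```

In a full round, each client would first train locally on its own shard (e.g. of MNIST) before sending its weights to this aggregation step; FedProx differs mainly by adding a proximal term to the local objective.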