974 results for Point method


Relevance: 60.00%

Abstract:

Most parametric software cost estimation models used today evolved in the late 1970s and early 1980s. At that time, the dominant software development techniques in use were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson Systems Development (JSD). As current cost estimating methods do not take account of these developments, they cannot provide adequate estimates of effort and hence cost. To address these shortcomings, two new estimation methods have been developed for JSD projects. One of these methods, JSD-FPA, is a top-down estimating method based on the existing MKII function point method. The other method, JSD-COCOMO, is a sizing technique which sizes a project, in terms of lines of code, from the process structure diagrams and thus provides an input to the traditional COCOMO method. The JSD-FPA method allows JSD projects in both the real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based upon a three-dimensional view of a system specification, as opposed to the largely data-oriented view traditionally used by FPA. The method uses counts of various attributes of a JSD specification to develop a metric which provides an indication of the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and utilising this figure to predict the effort, and hence cost, of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects. The JSD-COCOMO method uses counts of the levels in a process structure chart as the input to an empirically derived model which transforms them into an estimate of delivered source code instructions.
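
A minimal sketch of the productivity-based, top-down estimation step described above (a size metric converted into effort via past-project productivity); the size counts and productivity figures are hypothetical illustrations, not values from the JSD-FPA study.

```python
# Illustrative top-down estimate: effort = size / productivity,
# where productivity is derived from completed projects.
# All numbers below are hypothetical, not taken from the JSD-FPA study.

past_projects = [
    {"size_points": 420, "effort_person_months": 30},
    {"size_points": 610, "effort_person_months": 46},
]

# Average productivity in size points delivered per person-month.
productivity = sum(p["size_points"] for p in past_projects) / \
               sum(p["effort_person_months"] for p in past_projects)

new_project_size = 500          # size metric counted from the specification
estimated_effort = new_project_size / productivity

print(f"Productivity: {productivity:.1f} points/person-month")
print(f"Estimated effort: {estimated_effort:.1f} person-months")
```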

Relevance: 60.00%

Abstract:

In this letter, we report on the inscription of a fourth-order fiber Bragg grating written line-by-line in an optical fiber using a femtosecond laser. Strong Bragg resonance (~17 dB) and low insertion loss (~0.5 dB) were obtained with only 2000 periods. The measured refractive index change of the inscribed lines reaches up to 7 × 10⁻³. The grating was fully characterized, and lower insertion loss together with lower polarization-dependent loss were achieved compared to gratings made by the point-by-point method. A high-temperature annealing experiment shows that the grating can survive up to at least 800 °C.
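
For context, the order of a grating relates its period and resonance through the Bragg condition m·λ_B = 2·n_eff·Λ; the wavelength and effective index used below are assumed illustrative values, not figures from the letter.

```python
# Bragg condition for an m-th order grating: m * lambda_B = 2 * n_eff * period.
# lambda_B and n_eff are assumed values for illustration only.

m = 4                 # grating order (fourth-order, as in the letter)
lambda_B = 1.55e-6    # assumed Bragg wavelength [m]
n_eff = 1.447         # assumed effective index of the fiber mode

period = m * lambda_B / (2 * n_eff)
grating_length = 2000 * period   # 2000 periods, as reported

print(f"Required period: {period * 1e6:.2f} um")
print(f"Grating length:  {grating_length * 1e3:.2f} mm")
```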

Relevance: 60.00%

Abstract:

This paper is partially supported by project ISM-4 of the Department for Scientific Research, “Paisii Hilendarski” University of Plovdiv.

Relevance: 60.00%

Abstract:

Experimental and theoretical studies of noise processes in various kinds of AlGaAs/GaAs heterostructures with a quantum well are reported. The measurement setup, involving a Fast Fourier Transform analyzer and an analog wave analyzer covering the frequency range from 10 Hz to 1 MHz, a computerized data storage and processing system, and a cryostat covering the temperature range from 78 K to 300 K, is described in detail. The current noise spectra are obtained with the “three-point method”, using Quan-Tech and avalanche noise sources for calibration.

The properties of both GaAs and AlGaAs materials and of field effect transistors based on the two-dimensional electron gas in the interface quantum well are discussed. Extensive measurements are performed on three types of heterostructures, viz., Hall structures with a large spacer layer, modulation-doped non-gated FETs, and more standard gated FETs; all structures are grown by MBE techniques.

The Hall structures show Lorentzian generation-recombination noise spectra with nearly temperature-independent relaxation times. This noise is attributed to g-r processes in the 2D electron gas. For the TEGFET structures, we observe several Lorentzian g-r noise components which have strongly temperature-dependent relaxation times. This noise is attributed to trapping processes in the doped AlGaAs layer. The trap level energies are determined from an Arrhenius plot of log(τT²) versus 1/T as well as from the plateau values. The theory used to interpret these measurements and to extract the defect level data is reviewed and further developed. Good agreement with the data is found for all reported devices.
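
A minimal sketch of extracting a trap activation energy from the Arrhenius plot of log(τT²) versus 1/T mentioned above; the temperatures and relaxation times are made-up values for illustration only.

```python
import numpy as np

# Arrhenius analysis: tau * T^2 is proportional to exp(E_a / (k_B * T)),
# so the slope of ln(tau * T^2) versus 1/T gives E_a / k_B.
# Temperatures and relaxation times below are made-up illustrative data.

k_B = 8.617e-5                                             # Boltzmann constant [eV/K]
T = np.array([100.0, 125.0, 150.0, 175.0, 200.0])          # [K]
tau = np.array([2.0e-2, 1.1e-3, 1.4e-4, 3.0e-5, 9.0e-6])   # [s]

slope, intercept = np.polyfit(1.0 / T, np.log(tau * T**2), 1)
E_a = slope * k_B                                          # activation energy [eV]

print(f"Trap activation energy: {E_a:.2f} eV")
```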

Relevance: 60.00%

Abstract:

In this paper, we consider the secure beamforming design for an underlay cognitive radio multiple-input single-output broadcast channel in the presence of multiple passive eavesdroppers. Our goal is to design a jamming noise (JN) transmit strategy to maximize the secrecy rate of the secondary system. By utilizing the zero-forcing method to eliminate the interference caused by the JN to the secondary user, we study the joint optimization of the information and JN beamforming for secrecy rate maximization of the secondary system while satisfying all the interference power constraints at the primary users, as well as the per-antenna power constraint at the secondary transmitter. For an optimal beamforming design, the original problem is a nonconvex program, which can be reformulated as a convex program by applying the rank relaxation method. To this end, we prove that the rank relaxation is tight and propose a barrier interior-point method to solve the resulting saddle point problem based on a duality result. To find the global optimal solution, we transform the considered problem into an unconstrained optimization problem. We then employ the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method to solve the resulting unconstrained problem, which helps reduce the complexity significantly compared to conventional methods. Simulation results show the fast convergence of the proposed algorithm and substantial performance improvements over existing approaches.
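
The final step described above, solving an unconstrained reformulation with BFGS, can be illustrated with a generic quasi-Newton minimization; the quadratic objective below is a toy stand-in, not the secrecy-rate problem from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Generic BFGS minimization of a smooth unconstrained objective.
# The quadratic toy objective stands in for the paper's reformulated
# secrecy-rate problem, which is not reproduced here.

def objective(x):
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    b = np.array([1.0, -2.0])
    return 0.5 * x @ A @ x - b @ x

x0 = np.zeros(2)
result = minimize(objective, x0, method="BFGS")

print("Minimizer:", result.x)
print("Iterations:", result.nit)
```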

Relevance: 60.00%

Abstract:

An optimal day-ahead scheduling method (ODSM) for the integrated urban energy system (IUES) is introduced, which considers the reconfigurable capability of an electric distribution network. The hourly topology of the electric distribution network, together with the operation of the natural gas network and the energy centers, including the combined heat and power (CHP) units, other energy conversion devices, and demand responsive loads (DRLs), is optimized to minimize the day-ahead operation cost of the IUES. The hourly reconfigurable capability of the electric distribution network, realized through remotely controlled switches (RCSs), is explored and discussed. The operational constraints from the unbalanced three-phase electric distribution network, the natural gas network, and the energy centers are considered. The interactions between the electric distribution network and the natural gas network take place through the conversion of energy among different energy vectors in the energy centers. An energy conversion analysis model for the energy center was developed based on the energy hub model. A hybrid optimization method based on a genetic algorithm (GA) and a nonlinear interior point method (IPM) is utilized to solve the ODSM model. Numerical studies demonstrate that the proposed ODSM is able to provide the IUES with an effective and economical day-ahead scheduling scheme and reduce the operational cost of the IUES.
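
A minimal sketch of the energy hub coupling underlying the energy-center conversion analysis: output loads are obtained from input carriers through a coupling matrix. The efficiencies and gas dispatch factor below are assumed illustrative values, not parameters from the paper.

```python
import numpy as np

# Energy hub model: output loads L = C @ P, where C couples input carriers
# (electricity, natural gas) to outputs (electricity, heat).
# Efficiencies and the gas dispatch factor are assumed illustrative values.

eta_T = 0.98        # transformer efficiency
eta_CHP_e = 0.35    # CHP gas-to-electricity efficiency
eta_CHP_h = 0.45    # CHP gas-to-heat efficiency
eta_F = 0.90        # gas furnace efficiency
v = 0.6             # share of gas routed to the CHP unit (rest to the furnace)

C = np.array([
    [eta_T, v * eta_CHP_e],                    # electricity output
    [0.0,   v * eta_CHP_h + (1 - v) * eta_F],  # heat output
])

P = np.array([50.0, 80.0])   # electricity and gas inputs [kW]
L = C @ P                    # delivered electricity and heat [kW]

print(f"Electricity delivered: {L[0]:.1f} kW")
print(f"Heat delivered:        {L[1]:.1f} kW")
```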

Relevance: 60.00%

Abstract:

We consider a linear precoder design for an underlay cognitive radio multiple-input multiple-output broadcast channel, where the secondary system, consisting of a secondary base station (BS) and a group of secondary users (SUs), is allowed to share the same spectrum with the primary system. All the transceivers are equipped with multiple antennas, each of which has its own maximum power constraint. Assuming the zero-forcing method is used to eliminate the multiuser interference, we study the sum rate maximization problem for the secondary system subject to both the per-antenna power constraints at the secondary BS and the interference power constraints at the primary users. The problem of interest differs from those studied previously, which often assumed a sum power constraint and/or a single antenna at the primary receivers or at both the primary and secondary receivers. To develop an efficient numerical algorithm, we first invoke the rank relaxation method to transform the considered problem into a convex-concave problem based on a downlink-uplink result. We then propose a barrier interior-point method to solve the resulting saddle point problem. In particular, in each iteration of the proposed method we find the Newton step by solving a system of discrete-time Sylvester equations, which helps reduce the complexity significantly compared to the conventional method. Simulation results are provided to demonstrate the fast convergence and effectiveness of the proposed algorithm.
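
To illustrate the kind of linear-algebra subproblem mentioned above, the sketch below solves a small discrete-time Sylvester equation A X B − X + C = 0 by Kronecker vectorization; it is a generic small-scale solver under assumed random data, not the paper's Newton-step routine.

```python
import numpy as np

# Solve the discrete-time Sylvester equation  A X B - X + C = 0  for X using
# the vectorization identity vec(A X B) = (B^T kron A) vec(X), with
# column-stacking vec. Generic illustration only; not the paper's algorithm.

rng = np.random.default_rng(0)
n, m = 3, 4
A = rng.standard_normal((n, n)) * 0.4
B = rng.standard_normal((m, m)) * 0.4
C = rng.standard_normal((n, m))

M = np.kron(B.T, A) - np.eye(n * m)
x = np.linalg.solve(M, -C.flatten(order="F"))
X = x.reshape((n, m), order="F")

residual = A @ X @ B - X + C
print("Residual norm:", np.linalg.norm(residual))
```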

Relevance: 60.00%

Abstract:

The Mediterranean dietary pattern, through a healthy profile of fat intake, a low proportion of carbohydrate, a low glycemic index, a high content of dietary fiber, antioxidant compounds, and anti-inflammatory effects, reduces the risk of certain pathologies, such as cancer or cardiovascular disease (CVD). Nutritional adequacy is the comparison between the nutrient requirement and the intake of a certain individual or population. In population groups, the prevalence of nutrient inadequacy can be assessed by the probability approach or by using the Estimated Average Requirement (EAR) cut-point method.
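
A minimal sketch of the EAR cut-point method mentioned above: the prevalence of inadequate intake is estimated as the proportion of individuals whose usual intake falls below the EAR. The intake sample and the EAR value below are hypothetical.

```python
import numpy as np

# EAR cut-point method: prevalence of inadequacy is approximated by the
# proportion of usual intakes that fall below the Estimated Average Requirement.
# The intake sample and the EAR value below are hypothetical.

rng = np.random.default_rng(42)
usual_intake_mg = rng.normal(loc=320.0, scale=90.0, size=1000)  # e.g. magnesium
ear_mg = 265.0                                                  # hypothetical EAR

prevalence_inadequate = np.mean(usual_intake_mg < ear_mg)
print(f"Estimated prevalence of inadequacy: {prevalence_inadequate:.1%}")
```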

Relevance: 60.00%

Abstract:

This study investigated the benthic assemblages of coralligenous reefs at six sites off Chioggia, in the northern Adriatic Sea, comparing two different methods of analysis of photographic samples: the grid method (overlaying a grid of 400 cells) and the random point method (random distribution of 100 points on the photo). For the first method, taxonomic identification and percentage coverage estimation were performed manually using the photoQuad software. For the second, the CoralNet semi-automated web-based annotation system was applied, which allows for assisted and supervised identification whose success rate gradually improves after initial software training. The results obtained with the two methods of analysing the photographic samples are slightly different. The random point method gives lower species richness values and some differences in the coverage estimates; all of this is reflected in the calculation of the biotic index. NAMBER values are significantly lower with the random point method and provide locally different classifications (3 out of 6 sites). However, the results obtained with the two methods are closely related to each other and depict a similar spatial trend. These results urge caution when applying different, albeit similar, methods in the analysis of benthic assemblages aimed at environmental quality assessment.
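
A minimal sketch of the random-point cover estimate described above (100 random points per photo, with cover taken as the fraction of points falling on each taxon); the taxon labels and point annotations are made-up, not data from the study.

```python
from collections import Counter
import random

# Percentage cover from random-point annotation: drop N points on a photo,
# identify the taxon under each point, and report point counts as % cover.
# The labels below are made-up annotations, not data from the study.

random.seed(1)
taxa = ["encrusting coralline algae", "sponge", "bryozoan", "turf", "shadow"]
annotations = [random.choice(taxa) for _ in range(100)]  # 100 points per photo

cover = Counter(annotations)
for taxon, count in cover.most_common():
    print(f"{taxon:28s} {count:3d} %")   # 100 points, so counts are percentages
```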

Relevance: 40.00%

Abstract:

A new cloud point extraction (CPE) method was developed for the separation and preconcentration of copper(II) prior to spectrophotometric analysis. For this purpose, 1-(2,4-dimethylphenyl)azonaphthalen-2-ol (Sudan II) was used as the chelating agent and the solution pH was adjusted to 10.0 with borate buffer. Polyethylene glycol tert-octylphenyl ether (Triton X-114) was used as the extracting agent in the presence of sodium dodecyl sulphate (SDS). After phase separation, based on the cloud point of the mixture, the surfactant-rich phase was diluted with acetone, and the enriched analyte was determined spectrophotometrically at 537 nm. The variables affecting CPE efficiency were optimized. The calibration curve was linear within the range 0.285-20 µg L⁻¹ with a detection limit of 0.085 µg L⁻¹. The method was successfully applied to the quantification of copper in different beverage samples.
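
A minimal sketch of the spectrophotometric calibration step: fit absorbance against standard concentration and estimate a detection limit as 3·σ_blank/slope. The absorbance readings and blank standard deviation are invented for illustration, not taken from the reported CPE data.

```python
import numpy as np

# Linear calibration for spectrophotometric quantification and a 3*sigma/slope
# detection-limit estimate. All readings below are invented for illustration.

conc = np.array([0.5, 2.0, 5.0, 10.0, 20.0])             # standards [ug/L]
absorb = np.array([0.012, 0.046, 0.113, 0.228, 0.452])   # absorbance at 537 nm

slope, intercept = np.polyfit(conc, absorb, 1)
blank_sd = 0.0006                                         # assumed SD of blank readings

lod = 3 * blank_sd / slope
sample_abs = 0.175
sample_conc = (sample_abs - intercept) / slope

print(f"Slope: {slope:.4f} AU per ug/L, LOD: {lod:.3f} ug/L")
print(f"Sample concentration: {sample_conc:.2f} ug/L")
```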

Relevance: 40.00%

Abstract:

The quantitative structure-property relationship (QSPR) for the boiling point (Tb) of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) was investigated. The molecular distance-edge vector (MDEV) index was used as the structural descriptor. The quantitative relationship between the MDEV index and Tb was modeled using multivariate linear regression (MLR) and an artificial neural network (ANN), respectively. Leave-one-out cross-validation and external validation were carried out to assess the prediction performance of the models developed. For the MLR method, the prediction root mean square relative errors (RMSRE) of leave-one-out cross-validation and external validation were 1.77 and 1.23, respectively. For the ANN method, the corresponding values were 1.65 and 1.16. A quantitative relationship between the MDEV index and the Tb of PCDD/Fs was demonstrated. Both MLR and ANN are practicable for modeling this relationship, and the models developed can be used to predict the Tb of PCDD/Fs. Accordingly, the Tb of each PCDD/F was predicted by the developed models.
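
A minimal sketch of leave-one-out cross-validation of a multivariate linear regression model, scored by root mean square relative error (RMSRE); the descriptor values and boiling points below are synthetic stand-ins, not the MDEV data from the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

# Leave-one-out cross-validation of a multivariate linear regression model,
# reporting the root mean square relative error (RMSRE) of its predictions.
# Descriptors and boiling points below are synthetic, not the MDEV data.

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(30, 2))                         # synthetic descriptors
Tb = 300.0 + 25.0 * X[:, 0] + 10.0 * X[:, 1] + rng.normal(0, 5, 30)

rel_errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train_idx], Tb[train_idx])
    pred = model.predict(X[test_idx])
    rel_errors.append((pred[0] - Tb[test_idx][0]) / Tb[test_idx][0])

rmsre = 100 * np.sqrt(np.mean(np.square(rel_errors)))
print(f"LOOCV RMSRE: {rmsre:.2f} %")
```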

Relevance: 40.00%

Abstract:

The accuracy and efficiency of the Point-Centered Quarter Method (PCQM) were assessed using different numbers of sampled individuals per point, at 28 quarter points in an Araucaria forest in southern Paraná, Brazil. Three variations of the PCQM, differing in the number of individual trees sampled, were compared: the standard PCQM (SD-PCQM), with four individuals sampled per point (one in each quarter); a second variation (VAR1-PCQM), with eight individuals per point (two in each quarter); and a third variation (VAR2-PCQM), with 16 individuals per point (four in each quarter). Thirty-one tree species were recorded with SD-PCQM, 48 with VAR1-PCQM and 60 with VAR2-PCQM. The completeness of the vegetation census and the diversity index increased with the number of individuals sampled per quarter, indicating that VAR2-PCQM was the most accurate and efficient method when compared with VAR1-PCQM and SD-PCQM.
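
A minimal sketch of the standard PCQM density estimate (Cottam and Curtis): with one tree measured per quarter, density is approximately 1 / (mean point-to-tree distance)². The distances below are invented illustrative measurements, not data from the study.

```python
import numpy as np

# Standard PCQM density estimate: with one tree measured per quarter,
# density ~ 1 / (mean point-to-tree distance)^2.
# The distances below are invented illustrative measurements in metres.

distances_m = np.array([
    [2.1, 3.4, 1.8, 2.9],   # point 1: one distance per quarter
    [4.0, 2.2, 3.1, 2.6],   # point 2
    [1.9, 2.8, 3.6, 2.4],   # point 3
])

mean_distance = distances_m.mean()
density_per_m2 = 1.0 / mean_distance ** 2
density_per_ha = density_per_m2 * 10_000

print(f"Mean distance: {mean_distance:.2f} m")
print(f"Estimated density: {density_per_ha:.0f} trees/ha")
```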

Relevance: 40.00%

Abstract:

Tool center point calibration is a known problem in industrial robotics. The major focus of academic research is to enhance the accuracy and repeatability of next-generation robots. However, operators of currently available robots are working within the limits of the robot's repeatability and require calibration methods suitable for these basic applications. This study was conducted in association with Stresstech Oy, which provides solutions for manufacturing quality control. Their sensor, based on the Barkhausen noise effect, requires accurate positioning. This accuracy requirement gives rise to a tool center point calibration problem when measurements are executed with an industrial robot. Multiple options for automatic tool center point calibration are available on the market. Manufacturers provide customized calibrators for most robot types and tools. With the handmade sensors and multiple robot types that Stresstech uses, this would require a great deal of labor. This thesis introduces a calibration method that is suitable for any robot with two free digital input ports. It builds on the traditional method of using a light barrier to detect the tool in the robot coordinate system, but utilizes two parallel light barriers to simultaneously measure and detect the center axis of the tool. Rotations about two axes are defined from the center axis. The last rotation, about the Z-axis, is calculated for tools that have different widths along the X- and Y-axes. The results indicate that this method is suitable for calibrating the geometric tool center point of a Barkhausen noise sensor. In the repeatability tests, a standard deviation within the robot's repeatability was obtained. The Barkhausen noise signal was also evaluated after recalibration, and the results indicate correct calibration. However, future studies should be conducted using a more accurate manipulator, since the method employs the robot itself as a measuring device.
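
To give a sense of the geometry described above, the sketch below recovers the tool center axis direction from the robot-frame positions at which the tool interrupts two parallel light barriers, and reports its tilt relative to the flange Z-axis. The coordinates are made-up and the computation is a simplified illustration, not the thesis' calibration routine.

```python
import numpy as np

# Simplified illustration: the tool centre axis is the line through the two
# robot-frame positions where the tool interrupts two parallel light barriers.
# Coordinates are made-up; this is not the thesis' calibration code.

p_lower = np.array([612.4, 108.2, 355.0])   # crossing point at barrier 1 [mm]
p_upper = np.array([613.1, 107.6, 415.0])   # crossing point at barrier 2 [mm]

axis = p_upper - p_lower
axis /= np.linalg.norm(axis)                # unit vector along the tool axis

tilt_deg = np.degrees(np.arccos(axis[2]))               # deviation from flange Z
azimuth_deg = np.degrees(np.arctan2(axis[1], axis[0]))  # direction of the tilt

print("Tool axis direction:", np.round(axis, 4))
print(f"Tilt from flange Z: {tilt_deg:.2f} deg, azimuth: {azimuth_deg:.1f} deg")
```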