
interpolation

The literature on interpolation comprises 136 publications from 1983 to 2022, concentrated mainly in mathematics, oncology, automation technology, and computer technology. All 136 are journal articles, spread across 53 journals, including Science China, Wuhan University Journal of Natural Sciences (English edition), and Computers, Materials & Continua (English), among others. The publications were contributed by 333 authors, including Ali Madad, Arya Kumar Bedabrata Chand, and Dariusz Jacek Jakóbczak.

interpolation — publication counts

Journal articles

Articles: 136 (share: 100.00%)

Total: 136 articles

interpolation — publication trend chart

interpolation — researchers

  • Ali Madad
  • Arya Kumar Bedabrata Chand
  • Dariusz Jacek Jakóbczak
  • En-Bing Lin
  • Maciej Paszyński
  • Marcin Sieniek
  • Piotr Gurgul
  • Yousef Al-Jarrah
  • 周颂平
  • 崔利宏
Journal articles


    • Hira Soomro; Nooraini Zainuddin; Hanita Daud; Joshua Sunday
    • Abstract: Multistep integration methods are extensively used in simulations of high-dimensional systems because of their lower computational cost. Block methods were developed with the intent of obtaining numerical results at numerous points at a time and improving computational efficiency; hybrid block methods, for instance, are used specifically in the numerical integration of initial value problems. In this paper, an optimized hybrid Adams block method is designed for the solution of linear and nonlinear first-order initial value problems in ordinary differential equations (ODEs). In deriving the method, a Lagrange interpolation polynomial built on selected data points replaces the differential equation function and is integrated over a specified interval. Furthermore, the convergence properties and the region of stability of the method were examined. The newly derived method is convergent, consistent, and zero-stable. It was also found to be A-stable, meaning its stability region covers the whole left (negative) half-plane. Numerical computations of absolute errors show that the method performs better than the methods against which the results were compared, and it is also superior to existing methods in terms of stability and convergence.
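The derivation step described in this abstract, replacing the right-hand side f(t, y) with a Lagrange interpolation polynomial through known nodes and integrating it over one step, is the standard way Adams-type weights are obtained. The sketch below illustrates that generic mechanism with a two-node interpolant (it recovers the classical two-step Adams-Bashforth rule); it is not the authors' optimized hybrid block method, and the symbols f_nm1, f_n and the node spacing h are illustrative.

```python
# Generic illustration: interpolate f at known nodes with a Lagrange polynomial,
# then integrate the interpolant over the next step. This is not the paper's
# optimized hybrid block method; f_nm1, f_n and h are illustrative symbols.
import sympy as sp

t, h = sp.symbols("t h", positive=True)
f_nm1, f_n = sp.symbols("f_nm1 f_n")

# Lagrange interpolation polynomial through (t_{n-1}, f_{n-1}) = (0, f_nm1)
# and (t_n, f_n) = (h, f_n)
p = f_nm1 * (t - h) / (0 - h) + f_n * (t - 0) / (h - 0)

# Integrate the interpolant over [t_n, t_{n+1}] = [h, 2h]:
# y_{n+1} = y_n + integral of p(t) dt over that interval
increment = sp.expand(sp.integrate(p, (t, h, 2 * h)))
print(increment)   # -> 3*f_n*h/2 - f_nm1*h/2, the classical 2-step Adams-Bashforth update
```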
    • Wolfgang HABER
    • Abstract: Extrinsic information, from satellite observations and the outputs of spatial models, and intrinsic information, from ground observations and spatial sampling, provide two different but complementary streams of information about the Earth's surface. Both kinds of information are indispensable for modelling Earth's surface systems, together with an appropriate method for integrating them. Building on this idea, the fundamental theorem for Earth's surface system modelling (FTESM) was proposed, from which several corollaries have been deduced, corresponding to spatial interpolation, spatial upscaling, spatial downscaling, data fusion, and model-data assimilation, respectively (Yue et al., 2016).
    • Liljana Lata
    • Abstract: Albania, like almost every country in the world, continuously faces challenges in the integrated management of water resources. Limited access to water resources and the degrading quality of the environment, both closely tied to policies for the sustainable development of water resources, are among the main issues in this field. In conformity with the requirements of the EU Water Framework Directive, Albania has to develop water management plans for seven main river basins (including the Shkumbini River Basin), which were established in the country according to Decision No. 696, dated 30.10.2019. The main goal of this study was the development of an integrated hydrological and water management model to evaluate climate and development scenarios for the Shkumbini River Basin. The study applies the WEAP (Water Evaluation and Planning) software by SEI (Stockholm Environment Institute) to simulate and analyze a set of hydro-ecological and socio-economic scenarios in the Shkumbini River Basin and to identify its fundamental vulnerabilities to climate change between the years 2017 and 2050. Understanding specific vulnerabilities within a basin allows planners to propose and prioritize potential adaptation measures, which can be further examined with cost-benefit analyses. The spatially based models can incorporate the climatic and land-use conditions that determine water supply, which allows the model to investigate diverse changes within the system and to consider the various outcomes of uncertain futures, whether climatic, managerial, infrastructural, or demographic.
    • William Menke
    • Abstract: Generalized Least Squares (least squares with prior information) requires the correct assignment of two prior covariance matrices: one associated with the uncertainty of the measurements, the other with the uncertainty of the prior information. These assignments are often very subjective, especially when correlations among the data or among the prior information are believed to occur. However, when the general form of these matrices can be anticipated up to a set of poorly known parameters, the data and prior information may be used to better determine (or "tune") those parameters in a manner that is faithful to the underlying Bayesian foundation of GLS. We identify an objective function whose minimization leads to the best estimate of the parameters, and we provide explicit, computationally efficient formulas for the derivatives needed to implement the minimization with a gradient descent method. Furthermore, the problem is organized so that the minimization need be performed only over the space of covariance parameters, not over the combined space of model and covariance parameters. We show that the use of trade-off curves to select the relative weight given to observations and prior information is not a form of tuning, because it does not, in general, maximize the posterior probability of the model parameters and can lead to a different weighting than the procedure described here. We also provide several examples that demonstrate the viability of the method and discuss both its advantages and its limitations.
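For a fixed choice of the two covariance matrices, the GLS model estimate described above has a closed form; the paper's contribution is the outer loop that tunes the covariance parameters by gradient descent on a particular objective function. The sketch below shows only the inner solve, assuming simple diagonal covariances Cd = sigma_d^2 I and Cm = sigma_m^2 I for illustration; the paper's objective function and derivative formulas are not reproduced.

```python
# Inner step of GLS with prior information for fixed covariance parameters
# (simple variances sigma_d, sigma_m assumed for illustration). The outer
# "tuning" loop from the abstract would adjust these parameters by gradient
# descent on the paper's objective, which is not reproduced here.
import numpy as np

def gls_solve(G, d, m_prior, sigma_d, sigma_m):
    """Minimize (d - G m)^T Cd^-1 (d - G m) + (m - m_prior)^T Cm^-1 (m - m_prior)
    with Cd = sigma_d^2 I and Cm = sigma_m^2 I."""
    n_params = G.shape[1]
    A = G.T @ G / sigma_d**2 + np.eye(n_params) / sigma_m**2
    b = G.T @ d / sigma_d**2 + m_prior / sigma_m**2
    return np.linalg.solve(A, b)

# Toy usage: fit a straight line with a weak prior pulling both coefficients toward zero.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
G = np.column_stack([np.ones_like(x), x])          # design matrix
d = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(20)  # noisy observations
m_hat = gls_solve(G, d, m_prior=np.zeros(2), sigma_d=0.1, sigma_m=10.0)
print(m_hat)  # roughly [1, 2]
```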
    • Daisuke Hirahara; Eichi Takaya; Mizuki Kadowaki; Yasuyuki Kobayashi; Takuya Ueda
    • Abstract: Background: High-resolution medical images often need to be downsampled because of the memory limitations of the hardware used for machine learning. Although various image interpolation methods are applicable to downsampling, the effect of this preprocessing on the learning performance of convolutional neural networks (CNNs) has not been fully investigated. Methods: In this study, five different pixel interpolation algorithms (nearest neighbor, bilinear, Hamming window, bicubic, and Lanczos interpolation) were used for image downsampling to investigate their effects on the prediction accuracy of a CNN. Chest X-ray images from the NIH public dataset were examined under 10 downsampling patterns. Results: Accuracy improved as the image size decreased, with the best accuracy achieved at 64 × 64 pixels. Among the interpolation methods, bicubic interpolation obtained the highest accuracy, followed by the Hamming window.
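The preprocessing that the study compares, downsampling with different interpolation filters, can be reproduced with any common imaging library. The sketch below uses Pillow (version 9.1 or later is assumed for the Resampling enum) and the five filters named in the abstract; the file name and the 64 × 64 target size are illustrative.

```python
# Downsampling an image with the five interpolation filters named in the
# abstract. Pillow >= 9.1 assumed; "chest_xray.png" and the 64x64 size are
# illustrative placeholders, not values taken from the paper.
from PIL import Image

FILTERS = {
    "nearest":  Image.Resampling.NEAREST,
    "bilinear": Image.Resampling.BILINEAR,
    "hamming":  Image.Resampling.HAMMING,
    "bicubic":  Image.Resampling.BICUBIC,
    "lanczos":  Image.Resampling.LANCZOS,
}

def downsample_all(path, size=(64, 64)):
    """Return one downsampled copy of the image per interpolation filter."""
    img = Image.open(path).convert("L")   # chest X-rays are single-channel
    return {name: img.resize(size, resample=f) for name, f in FILTERS.items()}

# Usage: each variant would then be fed to the CNN for training/evaluation.
# variants = downsample_all("chest_xray.png")
```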
    • Hao Zhu; Mulan Wang; Kun Liu; Weiye Xu
    • Abstract: In order to solve the problem of complicated Non-Uniform Rational B-Spline (NURBS) modeling and to improve the real-time performance of evaluating high-order derivatives in the curve interpolation process, a NURBS modeling method based on slicing and layering of triangular meshes is introduced. The research and design of NURBS curve interpolation are carried out from two aspects: the software algorithm and the hardware structure. Based on an analysis of the characteristics of traditional computing methods using Taylor series expansion, the Adams and Runge-Kutta formulas are used in the NURBS curve interpolation process, and the process is then optimized according to the characteristics of NURBS interpolation. This ensures accuracy while avoiding the calculation of higher-order derivatives. Furthermore, hardware modules for the Adams and Runge-Kutta formulas are designed using the parallel hardware construction technology of Field Programmable Gate Array (FPGA) chips. The parallel computing process on the FPGA is compared with the traditional serial computing process on a CPU. Simulation and experimental results show that this scheme improves the computational speed of the system and that the algorithm is feasible.
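The parameter-update idea referred to above can be pictured as integrating du/dt = V / |C'(u)|, where V is the commanded feedrate and C'(u) the first derivative of the curve: a Runge-Kutta step then needs only that first derivative, whereas a Taylor-series update would also need higher derivatives. The sketch below uses a classical RK4 step with a quadratic Bézier curve (a NURBS with unit weights) standing in for the real tool path; it illustrates the principle only and is not the authors' FPGA implementation.

```python
# RK4 update of the curve parameter u for du/dt = feedrate / |C'(u)|.
# A quadratic Bezier curve stands in for the real NURBS tool path; control
# points, feedrate and step size are illustrative.
import numpy as np

P0, P1, P2 = np.array([0.0, 0.0]), np.array([1.0, 2.0]), np.array([3.0, 0.0])

def curve_derivative(u):
    """First derivative of the quadratic Bezier C(u) = (1-u)^2 P0 + 2u(1-u) P1 + u^2 P2."""
    return 2 * (1 - u) * (P1 - P0) + 2 * u * (P2 - P1)

def rk4_parameter_step(u, feedrate, dt):
    """Classical 4th-order Runge-Kutta step, needing only the first derivative of the curve."""
    f = lambda v: feedrate / np.linalg.norm(curve_derivative(v))
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

u = 0.0
for _ in range(10):                      # ten interpolation periods
    u = rk4_parameter_step(u, feedrate=1.0, dt=0.01)
print(u)
```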
    • Yuan Chen; Liangtao Duan; Weize Sun; Jingxin Xu
    • Abstract: In this paper, we address frequency estimation for 2-dimensional (2-D) complex sinusoids in the presence of white Gaussian noise. Using a sinc-function model of the discrete Fourier transform (DFT) coefficients of the input data, a fast and accurate frequency estimator is devised that requires only the DFT coefficient with the highest magnitude and its four neighbors. A variance analysis is also included to investigate the accuracy of the proposed algorithm. Simulations demonstrate the superiority of the developed scheme in terms of estimation performance and computational complexity.
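The "peak plus four neighbours" structure of such an estimator can be sketched as follows: locate the largest 2-D DFT coefficient and refine each frequency with a fractional-bin correction computed from the two neighbouring coefficients along that axis. The correction used below is Jacobsen's estimator, a generic stand-in rather than the paper's own sinc-model-based formula; the 64 × 64 grid and test frequencies are illustrative.

```python
# Coarse peak search in the 2-D DFT followed by a fractional-bin refinement
# from the neighbouring coefficients. The refinement is Jacobsen's estimator,
# used here as a stand-in for the paper's sinc-model-based correction.
import numpy as np

def estimate_2d_frequency(x):
    """Estimate the normalised frequencies (f1, f2) of a single 2-D complex sinusoid."""
    N1, N2 = x.shape
    X = np.fft.fft2(x)
    k1, k2 = np.unravel_index(np.argmax(np.abs(X)), X.shape)

    def refine(center, minus, plus):
        # fractional offset of the true frequency from the peak bin
        return np.real((minus - plus) / (2 * center - minus - plus))

    d1 = refine(X[k1, k2], X[(k1 - 1) % N1, k2], X[(k1 + 1) % N1, k2])
    d2 = refine(X[k1, k2], X[k1, (k2 - 1) % N2], X[k1, (k2 + 1) % N2])
    return (k1 + d1) / N1, (k2 + d2) / N2

# Toy usage: a noisy 2-D complex exponential with frequencies 0.123 and 0.271.
rng = np.random.default_rng(1)
n1, n2 = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
noise = 0.05 * (rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64)))
x = np.exp(2j * np.pi * (0.123 * n1 + 0.271 * n2)) + noise
print(estimate_2d_frequency(x))   # approximately (0.123, 0.271)
```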
    • A. A. Obayomi; S. O. Ayinde; O. M. Ogunmiloro
    • Abstract: In this paper, we use an interpolation function with strong trigonometric components to derive a numerical integrator for solving first-order initial value problems in ordinary differential equations. The numerical integrator has been tested for desirable qualities such as stability, convergence, and consistency. The discrete models have been used in a numerical experiment, which leads us to conclude that the schemes are suitable for the solution of first-order ordinary differential equations.
    • Qingmou Li; Sonya A. Dehler
    • Abstract: Commonly, seismic data processing procedures such as stacking and prestack migration require the ability to detect bad traces/shots and to restore or replace them by interpolation, particularly when the seismic observations are noisy or there are malfunctioning components in the recording system. However, currently available trace/shot interpolation methods in the spatial or Fourier domain impose requirements such as evenly sampled traces/shots, infinite signal bandwidth, and linear seismic events. In this paper, we present a novel method, termed the E-S (eigenspace seismic) method, that uses principal component analysis (PCA) of the seismic signal to address the reliable detection and interpolation of bad traces/shots. The E-S method assumes the existence of correlation between the observed seismic entities, such as trace or shot gathers, which makes it possible to estimate any one of these entities from the others for interpolation or seismic quality control. It first transforms a trace (or shot) gather into an eigenspace using PCA. In that eigenspace, every trace is treated as a point whose coordinates are its PCA loading scores. Simple linear, bilinear, or cubic-spline one-dimensional (1D) interpolation is used to determine the PCA loading scores for any arbitrary coordinate in the eigenspace, and these scores are then used to construct an interpolated trace for the desired position in physical space. The E-S method works with either regular or irregular sampling and, unlike various other published methods, it is well suited for band-limited seismic records with curvilinear reflection events. We developed the related algorithms and applied them to processed synthetic and offshore seismic survey data, with and without simulated noise, to demonstrate their performance. By comparing interpolated and observed seismic traces, we find that the E-S method can effectively assess trace quality and restore poor-quality data by interpolation. The successful processing of synthetic and real data suggests that the E-S method presented here will be widely applicable to seismic trace/shot interpolation and seismic quality control.
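The core of the E-S method as summarised above, PCA of a gather followed by 1-D interpolation of the loading scores along the spatial coordinate, can be sketched compactly. The code below uses a plain SVD and linear interpolation; the offsets, array shapes, and number of retained components are illustrative assumptions, and the quality-control side of the method is not shown.

```python
# PCA (via SVD) of a trace gather, 1-D interpolation of each trace's loading
# scores along the offset axis, and reconstruction of a trace at a new offset.
# Offsets, shapes and the number of components are illustrative.
import numpy as np

def es_interpolate_trace(gather, offsets, new_offset, n_components=5):
    """gather: (n_traces, n_samples); offsets: increasing spatial coordinate of each trace."""
    mean = gather.mean(axis=0)
    U, s, Vt = np.linalg.svd(gather - mean, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]          # loading scores per trace

    # 1-D linear interpolation of each PCA score as a function of trace offset
    new_scores = np.array([
        np.interp(new_offset, offsets, scores[:, j]) for j in range(n_components)
    ])
    return mean + new_scores @ Vt[:n_components]              # reconstructed trace

# Toy usage: a synthetic gather of offset-shifted wavelets; rebuild the trace
# that would sit midway between two recorded offsets.
t_axis = np.linspace(0.0, 1.0, 500)
offsets = np.arange(24) * 25.0
gather = np.array([np.exp(-((t_axis - 0.3 - off / 5000.0) ** 2) / 1e-3) for off in offsets])
trace = es_interpolate_trace(gather, offsets, new_offset=137.5)
print(trace.shape)
```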
    • Soad Samir; Eid Emary; Khaled Elsayed; Hoda Onsi
    • Abstract: Copy-move forgery is commonly used to conceal or alter content in a digital image for a specific aim: some portion of the genuine image is duplicated and pasted elsewhere within the same image. Copy-move forgery is therefore a significant problem and an active research area in verifying the authenticity of images. In this paper, a system for copy-move forgery detection is proposed. The proposed system is composed of two stages: a detection stage and a refined-detection stage. The detection stage uses Speeded-Up Robust Features (SURF) and Binary Robust Invariant Scalable Keypoints (BRISK) for feature detection, and the refined-detection stage uses image registration with a non-linear transformation to improve detection efficiency. Initially, the genuine image is selected, and then SURF and BRISK feature extraction are applied in parallel to detect the interest keypoints. This yields an appropriate number of interest points and helps ensure that the majority of the manipulated regions are found. RANSAC is employed to find the best group of matches to differentiate the manipulated parts. A non-linear transformation between the best-matched sets from the two feature extractors is then used as an optimization to obtain the best-matched set and detect the copied regions. A number of numerical experiments were performed using benchmark datasets such as CASIA v2.0, MICC-220, MICC-F600, and MICC-F2000. With the proposed algorithm, an overall average detection accuracy of 95.33% is obtained on these databases. Forgery detection achieved a true positive rate of 97.4% for tampered images with object translation, different degrees of rotation, and enlargement. Results across the different datasets thus show that the proposed algorithm can identify the altered areas with high reliability and can handle multiple cloning.
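A minimal keypoint-based sketch in the spirit of the pipeline above is given below. It uses only BRISK (SURF lives in opencv-contrib and is omitted here) and a RANSAC-fitted affine transform in place of the paper's non-linear refinement stage; the ratio-test threshold, minimum keypoint separation, and input filename are illustrative.

```python
# Keypoint-based copy-move candidate detection: BRISK keypoints matched within
# the same image, filtered by a ratio test and spatial separation, then a
# RANSAC-fitted affine transform between the matched clusters. This is a sketch
# of the general technique, not the paper's SURF+BRISK+non-linear pipeline.
import cv2
import numpy as np

def copy_move_candidates(path, min_dist=10.0, ratio=0.6):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    kps, desc = cv2.BRISK_create().detectAndCompute(img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    src, dst = [], []
    for matches in matcher.knnMatch(desc, desc, k=3):
        # drop the trivial self-match, then apply Lowe's ratio test
        matches = [m for m in matches if m.trainIdx != m.queryIdx]
        if len(matches) < 2 or matches[0].distance > ratio * matches[1].distance:
            continue
        p1 = np.array(kps[matches[0].queryIdx].pt)
        p2 = np.array(kps[matches[0].trainIdx].pt)
        if np.linalg.norm(p1 - p2) > min_dist:      # plausible source/copy pair
            src.append(p1)
            dst.append(p2)

    if len(src) < 3:
        return None, []
    # Fit an affine transform between the matched clusters with RANSAC; inliers
    # are the keypoint pairs consistent with one copied-and-pasted region.
    M, inliers = cv2.estimateAffinePartial2D(np.float32(src), np.float32(dst),
                                             method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    return M, inliers

# Usage (illustrative filename):
# transform, inlier_mask = copy_move_candidates("suspect_image.png")
```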
