Proceedings in Cybernetics

Obtaining of Sparse Solutions with D-Optimal Partitioning of Original Sample into Training and Test Parts and Regularity Criterion

Abstract

The article considers methods for obtaining sparse solutions based on the LS-SVM method. A sparse solution is obtained by using training samples formed according to a D-optimal experiment design. An algorithm for forming a test sample using the regularity criterion is also presented. A computational experiment on artificial data is conducted to verify the effectiveness of the proposed methods of forming a training sample. The generalization ability of the LS-SVM models is improved by selecting the Gaussian kernel parameter so as to minimize the prediction error; the parameter is chosen at the minimum of external criteria. The cross-validation and regularity criteria serve as the external criteria for assessing model quality. The quality of the obtained solutions is measured by the mean square error. During data modeling, the noise variance (noise level) is specified as a percentage of the signal power. The results lead to the conclusion that a training sample obtained by a D-optimal split can be used to obtain a sparse solution; additional optimization of the composition of points (an exchange algorithm) can also be applied and in some cases yields better solutions.
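To illustrate the procedure described in the abstract, the following sketch fits an LS-SVM regression with a Gaussian kernel and selects the kernel width by the regularity criterion, i.e. the mean square error on a held-out test part. This is a minimal illustrative assumption, not the authors' code: the function names, the toy signal, and the simple train/test split (standing in for the D-optimal split with exchange-algorithm refinement) are all hypothetical.

```python
# Hypothetical sketch: LS-SVM regression with a Gaussian kernel, kernel width
# chosen by the regularity criterion (MSE on the test part of the sample).
import numpy as np

def gaussian_kernel(A, B, sigma):
    # K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, sigma, gamma=10.0):
    # Solve the LS-SVM dual linear system:
    # [ 0   1^T         ] [b    ]   [0]
    # [ 1   K + I/gamma ] [alpha] = [y]
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, sigma, X_new):
    return gaussian_kernel(X_new, X_train, sigma) @ alpha + b

# Artificial data: a noisy sinc signal, as a stand-in for the paper's test data.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(60)

# Plain split; the paper instead places these points by a D-optimal design.
X_tr, X_te = X[:40], X[40:]
y_tr, y_te = y[:40], y[40:]

# Regularity criterion: MSE on the test part, minimized over sigma.
best = min(
    (np.mean((lssvm_predict(X_tr, *lssvm_fit(X_tr, y_tr, s), s, X_te) - y_te) ** 2), s)
    for s in (0.1, 0.3, 0.5, 1.0, 2.0)
)
print(f"best sigma = {best[1]}, test MSE = {best[0]:.4f}")
```

Sparsity would then come from keeping only the design points of the D-optimal training sample as support vectors, rather than the full sample.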

About the Authors

A. A. Popov
Novosibirsk State Technical University
Russian Federation


Sh. A. Boboev
Novosibirsk State Technical University
Russian Federation


References

1. Suykens J. A. K., Van Gestel T., De Brabanter J., De Moor B., Vandewalle J. Least Square Support Vector Machines. New Jersey ; London ; Singapore ; Hong Kong : World Scientific, 2002. 290 p.

2. Vapnik V. Statistical Learning Theory. New York : John Wiley, 1998. 736 p.

3. Cherkassky V., Ma Y. Practical selection of SVM parameters and noise estimation for SVM regression // Neural Networks. 2004. № 17. P. 113–126.

4. Popov A. A., Sautin A. S. Determining the parameters of the support vector algorithm in solving the regression construction problem // Collection of Scientific Papers of NSTU (Novosibirsk). 2008. № 2 (52). P. 35–40. (In Russ.)

5. Popov A. A., Sautin A. S. Selection of support vector machines parameters for regression using nested grids // The Third International Forum on Strategic Technology (Novosibirsk), 2008. P. 329–331.

6. Mao W., Mu X., Zheng Y., Yan G. Leave-one-out cross-validation-based model selection for multi-input multi-output support vector machine // Neural Computing and Application. 2014. № 2 (24). P. 441–451.

7. Rivas-Perea P., Cota-Ruiz J., Rosiles J.-G. A nonlinear least squares quasi-Newton strategy for LP SVR hyper-parameters selection // International Journal of Machine Learning and Cybernetics. 2014. № 4 (5). P. 579–597.

8. Gupta A. K., Guntuku S. C., Desu R. K., Balu A. Optimisation of turning parameters by integrating genetic algorithm with support vector regression and artificial neural networks // The International Journal of Advanced Manufacturing Technology. 2015. № 1–4 (77). P. 331–339.

9. Suykens J. A. K., De Brabanter J., Lukas L., Vandewalle J. Weighted least squares support vector machines: robustness and sparse approximation // Neurocomputing. 2002. Vol. 48. P. 85–105.

10. Stepashko V. S., Kocherga Yu. L. Methods and criteria for solving structural identification problems // Avtomatika. 1985. № 5. P. 29–37. (In Russ.)

11. Sarychev A. P. The averaged regularity criterion of the group method of data handling in the problem of searching for the best regression // Avtomatika. 1990. № 5. P. 28–33. (In Russ.)

12. Stepashko V. S. Asymptotic properties of external criteria for model selection // Avtomatika. 1988. № 6. P. 75–82. (In Russ.)

13. Popov A. A. Sample partitioning for external model selection criteria using experiment design methods // Zavodskaya Laboratoriya. 1997. № 1. P. 49–53. (In Russ.)

14. Popov A. A., Boboev Sh. A. Obtaining a test sample in the LS-SVM method using optimal experiment design // Science Bulletin of the Novosibirsk State Technical University. 2016. № 4. P. 80–99. (In Russ.)

15. Popov A. A. Sequential schemes for constructing optimal experiment designs // Collection of Scientific Papers of NSTU (Novosibirsk). 1995. Iss. 1. P. 39–44. (In Russ.)


For citations:


Popov A.A., Boboev Sh.A. Obtaining of Sparse Solutions with D-Optimal Partitioning of Original Sample into Training and Test Parts and Regularity Criterion. Proceedings in Cybernetics. 2018;(3 (31)):162-168. (In Russ.)



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1999-7604 (Online)