https://doi.org/10.31449/inf.v48i5.5366 Informatica 48 (2024) 97-110

Edge Detection and Simulation Analysis of Multimedia Images Based on Intelligent Monitoring Robot

Xiaoyan Wang*, Ya Li
School of Computer Science and Technology, Nanyang Normal University, Nanyang, Henan, 473061, China
E-mail: 2016122095@jou.edu.cn
*Corresponding author

Keywords: multimedia vision, virtual reality, reconstruction of image texture, enhancement-related search

Received: October 25, 2023

Current image edge detection methods suffer from imperfect performance and long processing times, which leaves the quality of multimedia images after edge detection low. To remove the edge features contained in multimedia images efficiently and to overcome the tendency of traditional edge detection methods to ignore image texture, this paper puts forward a multimedia image edge detection method based on an intelligent monitoring robot. The method focuses on the similarity of pixels and edge detection points between adjacent regions of a multimedia image and uses an iterative scheme to weight those adjacent regions, so that the resulting images differ in character from traditional images; the improved viewing mode is well received by audiences, who can enter an immersive state and become the subject of the image while watching. The VR images are corrected by binocular offset positioning, noise is removed from the corrected images, and a 3D edge detection correlation retrieval method is used to obtain the ensemble and outlier values, determine the maximum and minimum, find and correct errors, and compute the completed values. The experimental results suggest that the intelligent monitoring robot removes multimedia image edge features effectively, achieving the best edge effect at a high image processing speed.

Povzetek: An edge detection method for multimedia images based on an intelligent monitoring robot is developed, which improves image quality by removing edges efficiently.

1 Introduction

As a virtual environment created by digital technology [1-2], virtual reality (VR) is a virtual situation and an imaginary world that does not exist in the real world, and it brings customers multiple sensory experiences such as sight, sound, touch, and smell. It offers brand-new 3D perceptual experiences [3-4] to customers, who can experience the virtual world through displays or data gloves [5]. Multimedia image edge detection computing creates a novel visual world for customers; this approach is currently applied in several fields and is a new development trend. In conventional viewing, the audience follows the camera and the director controls the creation of the work. Totally different from this conventional form (passive reception of information), multimedia images are a game-changer (active reception): the audience no longer needs to follow the camera and can instead switch viewpoints freely as they prefer, so each viewer's visual perception and experience of VR is different.
Hence, in the film and television industry, multimedia images are an irresistible trend for the future, which will impress the audience with new experiences based on new technical features and viewing models and facilitate the interaction of images with viewers [6-7]. If language is the traditional tool of communication, in multimedia images the senses of hearing and touch cannot be controlled by the media. Multimedia images represent a connection between the real and virtual worlds by bringing sensibility close to nature and to concrete physical objects, rather than relying on symbolism to understand the expression of the work. Put another way, VR neither uses words nor needs explanation. The customer uses the multimedia image edge detection algorithm to reach the virtual world and experience visual, auditory, and tactile sensations [8-9]. In a multimedia image, the audience first experiences the visual and auditory senses; multimedia images attract attention because of this sensory experience. Of course, the content of the work is also crucial, and the VR effect will be less impressive if the audience cannot resonate with the plot. If the plot is boring, even the best medium will not win the customer's affection. Multimedia images and artwork formats differ from traditional images: the traditional image mainly aims at the customer's sensory effect, whereas multimedia images focus more on the immersion of the audience. Meredith Bricken has suggested, with respect to the VR edge detection calculation method, that if VR customers are fully immersed in the virtual state, they are able to control the process. This effect is characteristic of multimedia images, a special relationship between viewers and images, and it constitutes a new model for learning knowledge. For example, viewers can choose to be fish (or birds) in the virtual world as they wish, gaining knowledge from the animal's point of view and experiencing the extraordinary sensation of being that animal. In multimedia images, the audience is able to switch perspectives according to their preferences. In order to guide the audience's vision, the director of multimedia images should use the latest photographic technology at the beginning of the work so that it unfolds smoothly. The viewing of multimedia images is not a one-sided act; it is an interactive form of dialogue. The computer is able to make timely updates according to the preferences of the audience, providing works corresponding to their needs. Of course, this interaction can also distract the audience's concentration [10-11]. The more varied images of VR can especially disturb the audience's concentration and can therefore lead to a larger number of viewers losing interest in the director's shots; this is the result of being intoxicated by the virtual world rather than by a meaningless story. To overcome this series of drawbacks when producing multimedia images, the director uses long shots to correct stage deviation, and stereo backgrounds, enhanced lighting, and special effects to attract the audience's focus, so that the director's pre-planned work unfolds through perspective and creativity.
When shooting a traditional film, the director's idea plus the camera's vision captures the work through multiple angles, but the change of angle prevents the audience from getting into the plot of the work as quickly as possible. Hence, the director completes the production through special editing and unique shooting angles. Multimedia images differ from traditional images in that they are not constrained by the viewer's line of sight: with VR, the viewer can enter directly into the role of the work. From the above, we can see that the shooting of multimedia images also differs from traditional video shooting. By using new technology, the work can be displayed better and can be adjusted in time according to the audience's experience [12-13]. The accuracy and speed of the VR edge detection calculation method for multimedia vision is now the primary issue. The multimedia image edge detection calculation method can improve the accuracy of the calculation and reduce the corresponding calculation time by using the mean value method; on the premise of retaining the original nature of the image, virtual edge detection is applied to realistic situations. Table 1 summarizes the related studies.

Table 1: Summary of related works

Ref. [27], Bluetooth Low Energy (BLE). Findings: The validation of the experimental system demonstrated that the fuzzy controller attains exceptional accuracy in detecting and avoiding obstacles, while also making directional decisions with minimal computational time. Drawbacks: Despite its energy efficiency and low power consumption, BLE has a comparatively lower data transfer rate than classic Bluetooth, which can impact the performance of applications that demand high data throughput.

Ref. [28], Human machine interface (HMI). Findings: The smart glove, equipped with PDMS-CB strain sensors, effectively controls the remote motion of the robot finger, as demonstrated in experiments. Drawbacks: A significant obstacle is the possibility of information overload, where an abundance of data and intricate interfaces can overwhelm users, resulting in confusion and errors.

Ref. [29], Level of Robot Autonomy (LORA). Findings: While teleoperating systems have achieved advanced technical maturity, collaborative assisting and autonomous systems are currently in the research phase, emphasizing advances in ultrasound image processing and force adaptation strategies. Drawbacks: A notable drawback is its inclination to oversimplify the intricate nature of autonomous systems; LORA defines distinct levels, spanning from teleoperation to full autonomy, and may fail to adequately capture the nuanced capabilities exhibited by robots.

Ref. [30], Robot Operating System (ROS). Findings: The Robotics-Academy has proven effective in numerous undergraduate and master's degree engineering courses, as well as in a pilot course designed for pre-university students. Drawbacks: ROS does not provide real-time guarantees, making it less suitable for tasks demanding accurate timing and low-latency control, such as certain industrial automation applications; furthermore, ROS was initially designed for research and prototyping, which may hinder its seamless integration into large production deployments without additional optimization.
2 Intelligent monitoring robot

Multimedia images encompass digital materials that blend various forms of media components, including images, graphics, and occasionally audio or video, with the purpose of conveying information or crafting a visually captivating experience. Within the realm of multimedia, images play a pivotal role in improving communication, storytelling, and the overall user experience. Their versatility is evident across a spectrum of applications such as websites, presentations, educational materials, entertainment, and beyond. Multimedia images exhibit a broad range of applications and constantly adapt to technological advances: as technology progresses, the incorporation of multimedia components becomes progressively refined, delivering higher levels of immersion and engagement for users.

Intelligent monitoring robot vision imaging serves as the eyes of workers during processing and manufacturing and is an essential signal basis for the intelligent monitoring robot to carry out its trajectory motion. It mainly converts collected features such as the color and shape of the workpiece into digital signals and transmits them to the general control system of the intelligent monitoring robot so as to control the operation of the lower machine [14]. Hence, all features required for processing the workpiece should be acquired during signal acquisition to ensure that the image is fully represented. The two key elements that influence the effectiveness of vision imaging for intelligent monitoring robots are image acquisition and image processing. In addition to hardware facilities such as vision sensors, external light sources also have a significant influence on the stability of image acquisition. Therefore, a lighting system is often added to the vision system of intelligent monitoring robots during image acquisition.

The hand-eye calibration method is defined mainly according to the roles played by the camera and the end-effector of the intelligent monitoring robot, and the coordinate system of the camera is connected to the coordinate system of the robot to reflect their relative position relationship [15]. According to how the camera is mounted, there are two types, Eye-to-Hand and Eye-in-Hand, shown in Figure 1 and Figure 2 below, respectively.

Figure 1: Eye-to-Hand hand-eye detection system

Figure 2: Eye-in-Hand hand-eye detection system

In the system shown in Figure 1, the camera remains stationary while the robot tool coordinate system follows the trajectory's motion; only the robot base coordinate system remains constant. Consequently, the camera coordinate system and the robot base coordinate system are relatively stable, allowing their positional relationship to be determined. The process begins with determining the relative position characteristics of the workpiece captured by the camera once the robot vision system is in place. Subsequently, the position characteristics of the workpiece are identified within the base coordinate system of the intelligent monitoring robot. Finally, the relative position relationship between the camera coordinate system and the base coordinate system of the intelligent monitoring robot is obtained through matrix transformation [16-17].
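To make the matrix transformation step concrete, the following Python sketch shows how the camera-to-base relationship could be recovered in an Eye-to-Hand setup once the same workpiece pose is known in both the camera frame and the robot base frame. This is a minimal sketch using numpy homogeneous transforms; the pose values are hypothetical placeholders, not data from the paper.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses of the same workpiece, observed at the same instant:
# T_cam_obj  : workpiece pose expressed in the camera coordinate system
# T_base_obj : workpiece pose expressed in the robot base coordinate system
T_cam_obj = pose_to_matrix(np.eye(3), np.array([0.10, 0.05, 0.60]))
T_base_obj = pose_to_matrix(np.eye(3), np.array([0.45, -0.20, 0.15]))

# Eye-to-Hand: the camera is fixed with respect to the base. A point satisfies
# p_base = T_base_obj @ p_obj and p_cam = T_cam_obj @ p_obj, which gives
# T_base_cam = T_base_obj @ inv(T_cam_obj).
T_base_cam = T_base_obj @ np.linalg.inv(T_cam_obj)
print(T_base_cam)
```

In practice several workpiece or calibration-target poses would be collected and the transform estimated in a least-squares sense, but the single-observation case above already illustrates the matrix relationship.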
In the system shown in Figure 2, the camera moves with the robot while the robot base coordinate system remains stationary. In this scenario, the camera coordinate system cannot be stabilized relative to the robot base coordinate system. Instead, the camera is affixed to the robot end-effector, and its coordinate system remains fixed relative to the end-effector coordinate system. To achieve vision localization for the intelligent monitoring robot, the relative positional relationship between the camera coordinate system and the robot base coordinate system can be determined by acquiring the positional characteristics of the workpiece in both the camera coordinate system and the end-effector coordinate system. The camera parameter change matrix and the robot end-position change matrix can then be used to obtain the hand-eye relationship matrix. Comparing the two hand-eye calibration systems, the results suggest that calibration based on the Eye-in-Hand system is evidently more complicated, and it is one of the popular topics in current research on vision systems for intelligent monitoring robots.

3 Multimedia image edge detection

The multimedia image edge test module is designed with a focus on efficiently identifying and handling test objects. It requires that the intelligent robot engaged in image discrimination be trained on the current object beforehand, acquiring characteristic data of the target object to establish a useful reflection table. During system operation, the current work object is captured, extraction of characteristic values is activated, and these values are compared with those in the reflection table, culminating in the identification of the target object. The specific design steps of the multimedia image edge test software are illustrated in Figure 3 below.

Figure 3: Software design flow of the multimedia image edge detection module

As Figure 3 shows, during the multimedia image edge test the intelligent robot first preprocesses the multimedia image. In this way the image information is simplified as much as possible while the valuable information and its detectability are enhanced. Multimedia image segmentation is the technique and process of dividing a multimedia image into a number of specific regions with particular properties and extracting the target of interest. Multimedia image feature value acquisition identifies the most effective features from a large number of candidates and sets up a feature reflection table for matching against the target object. Multimedia image matching compares the acquired feature values with the feature reflection table in order to identify the target object. The vision system of the intelligent monitoring robot feeds digital signals back to the general control system of the robot, which requires information on the characteristics of the workpiece to be collected. When the camera acquires feature information of the workpiece, a large amount of redundant information is generated by the external environment, so the camera must extract the required feature information from the huge amount of acquired data in a suitable way.
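The train-then-match workflow of Figure 3 (build a feature reflection table offline, then compare run-time feature values against it) can be sketched as follows. This is a minimal illustration assuming OpenCV and Hu moments of the segmented contour as the feature; the feature choice, file names, and table contents are hypothetical, not the paper's actual design.

```python
import cv2
import numpy as np

def contour_features(gray):
    """Segment the largest region by Otsu thresholding and return log-scaled Hu moments."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(largest)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # log scale keeps values comparable

# Offline "training": build the feature reflection table from labelled sample images.
reflection_table = {
    name: contour_features(cv2.imread(path, cv2.IMREAD_GRAYSCALE))
    for name, path in [("workpiece_A", "a.png"), ("workpiece_B", "b.png")]  # hypothetical files
}

def identify(gray, table):
    """Match a run-time image against the table by nearest feature distance."""
    query = contour_features(gray)
    return min(table.items(), key=lambda kv: np.linalg.norm(kv[1] - query))[0]
```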
In this paper, workpiece feature extraction is analyzed using two methods: the threshold segmentation method and the edge detection method. When the feature information of the workpiece is extracted by threshold segmentation, two types of threshold are available, fixed thresholds and automatic thresholds. The value set by fixed thresholding does not change; when the external environment changes, the fixed threshold does not change with it, which can severely affect the accuracy of the captured image, as shown in Figure 4 below [18-19].

Figure 4: Acquired image of the workpiece in the fixed threshold case

With an automatic threshold, the algorithm adjusts automatically in response to changes in the image pixels and selects a suitable threshold based on the gray-level distribution of the pixels. This approach is suitable for Eye-in-Hand hand-eye systems, in which the images captured by the camera, and their grayscale values, change continuously with the robot motion. The details are shown in Figure 5.

Figure 5: Adaptive threshold value

After threshold segmentation has produced the feature information of the workpiece, edge detection is often used to acquire accurate graphic information. Methods such as the Canny operator are applied to analyze and compute the edge detection results and achieve accurate processing of the image [20]. In the calculation process, the image is first smoothed by Gaussian filtering and the global gradient is computed; pixels whose local gradient is not the maximum are removed while pixels with the largest local gradient are kept; false edge segments generated by noise or color changes are eliminated; and a high threshold and a low threshold are used to distinguish edge pixels, with the high threshold in general set to about twice the low threshold. By suppressing the points below the low threshold, the edge detection effect is achieved (see the code sketch below). The details are shown in Figure 6 and Figure 7. Finally, the workpiece edge pixels, that is, the contours of the workpiece image, are extracted.

Figure 6: Edge detection results

Figure 7: Contour detection results

3.1 Intelligent monitoring robot algorithm

The intelligent monitoring robot control algorithm is mainly used to visualize multimedia image edge detection while the problem is being addressed. To resolve this problem thoroughly, the parameters of the multimedia image recognition model must be optimized by using the intelligent monitoring robot control algorithm in combination with the iterative strategy adopted in this paper. Compared with a recognition model built on conventional vectors, this combination differs in how parameters are shared: the intelligent monitoring robot control algorithm enables sharing between local connections and the multimedia image edge detection data.
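Returning to the workpiece processing pipeline above, the adaptive-threshold plus Canny workflow can be illustrated with a brief OpenCV sketch. The file name and parameter values are illustrative assumptions rather than settings reported in the paper; the high threshold is simply taken as roughly twice the low one, as noted in Section 3.

```python
import cv2

# Load a workpiece image in grayscale (hypothetical file name).
gray = cv2.imread("workpiece.png", cv2.IMREAD_GRAYSCALE)

# Gaussian smoothing suppresses noise before the gradient computation.
blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)

# Adaptive (Otsu) threshold: coarse segmentation of the workpiece region.
otsu_value, mask = cv2.threshold(blurred, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Canny edge detection; non-maximum suppression and hysteresis linking happen
# internally, with the high threshold set to about twice the low one.
low = max(1, int(0.5 * otsu_value))
edges = cv2.Canny(blurred, low, 2 * low)

# Extract the workpiece contours from the edge map.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"detected {len(contours)} contour(s)")
```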
Different from the traditional approach, in the intelligent monitoring robot control algorithm the initial features of the acquired input signals are used on the basis of layer visualization. Differences in the mean amplitude of the time-domain features, the short-time energy, and so on are also applied. The initial input features of the multimedia image recognition model proposed in this paper use the short-time energy [21-22]. Thus, denoting the multimedia image edge detection signal by x(n), the short-time energy can be expressed as:

$$E_n = \sum_{m}\left[x(m)\,w(n-m)\right]^2 = \sum_{m} x^2(m)\,h(n-m) \qquad (1)$$

However, the high dimensionality of the initial time-domain features can introduce a large amount of interference and noise, so the input visualization signal needs to be processed for dimensionality reduction. At this point, the intelligent monitoring robot control algorithm is used to count the investigated multivariate quantities and to analyze the multivariate structure through principal component analysis. Dimensionality reduction is performed on the multimedia image data so that the detected image edges in the input data can be visualized. The output data are processed by the intelligent monitoring robot control algorithm based on deep learning.

In multimedia image edge detection, feature vectors are observed at various positions during the visualization of the network, which shows that the network can have different visualization capabilities; the level of the feature vector is used to denote this property. During multimedia image recognition, if feature vector T is observed n times and T_k denotes the level of the kth occurrence of T, the feature vector level score is

$$\mathrm{score}(T) = \sum_{k=1}^{n}\ln\!\left(1 + \frac{1}{2^{T_k}}\right) \qquad (2)$$

The feature vector weight can be expressed as

$$w(T) = \mathrm{score}(T_k)\cdot idf \qquad (3)$$

When the similarity in weights between query Q and feature vector phrase D is calculated, the ratio of the weight of the intersection of Q and D to the total weight of Q and D is used, because the regular VSM similarity involves a large computational workload and a high time overhead:

$$\mathrm{sim}(Q,D) = \frac{\sum_{k=1}^{|Q\cap D|}\left[w_{qk}(T)+w_{dk}(T)\right]}{\sum_{k=1}^{|Q|} w_{qk}(T)+\sum_{k=1}^{|D|} w_{dk}(T)} \qquad (4)$$

With regard to the vector space multimedia image recognition model, the conventional way to compute the visual vector space for multimedia image edge detection is to calculate the cosine similarity between vectors [23-24]. Thus, the visual vector space similarity between user u and multimedia image d can be defined as

$$\mathrm{Sim}(u,d) = \frac{u\cdot d}{\left\|u\right\|\left\|d\right\|} \qquad (5)$$

The region distance metric is an essential benchmark for region integration, and the distance measurement benchmark directly determines the results of region combination and the final image edge detection results. In general, the condition required to merge two regions is that the two regions are spatially adjacent and similar in color and that no significant edges appear in their vicinity. The color distance and edge distance between regions are specified accordingly.
The color distance can be obtained as

$$D^{c}_{ij} = \frac{\left(r_i - r_j\right)^2}{r_i + r_j} \qquad (6)$$

The edge distance can be obtained as

$$D^{e}_{ij} = \left|\mathrm{Ave}(i) - \mathrm{Ave}(j)\right| \qquad (7)$$

The system works through the inverter nodes and finally acts on everything in the environment via the actuators, exhibiting the characteristics of autonomy, mutuality, purpose, sociality, cooperation, sustainability, adaptability, and distribution. If the system needs to increase the number of remote monitoring nodes, the intelligent monitoring robot algorithm of the new monitoring node need only be adjusted to a communication mode suitable for the transformerless single-phase PV grid-connected inverter, and it is then sufficient to add the information-behavior graph of this node. When the communication mode is changed in this way, the intelligent monitoring robot algorithm of the monitoring node is adjusted to fit the communication mode of the new electrified railroad. If a client queries a video image, the USG first confirms the client's ID and permission and connects to the video control center via the intelligent monitoring robot algorithm; after receiving the user message, the video control center analyzes it based on the information-behavior graph, then forwards the corresponding remote monitoring node video group to the client, confirms it to the client, or retrieves the corresponding alarm video data from the video database and returns it to the client, among other actions.

The single-phase grid-connected fundamental voltage $v_s$ interacts with the in-phase fundamental current of the LC branch and transfers active power between the two converters, causing a change in the mean value of the DC capacitor voltage. Ignoring converter losses, by the law of conservation of energy,

$$\frac{1}{2}\left(v_{sd} i_{cd} + v_{sq} i_{cq}\right) = C v_{dc}\frac{dv_{dc}}{dt} + v_{dc} i_{o} \qquad (8)$$

where C is the DC bus capacitance. Orienting $v_{sq} = 0$ along the d-axis, then

$$\frac{1}{2} v_{sd} i_{cd} = C v_{dc}\frac{dv_{dc}}{dt} + v_{dc} i_{o} \qquad (9)$$

Let

$$v_{sd} = V_{sd} + \tilde{v}_{sd},\quad i_{cd} = I_{cd} + \tilde{i}_{cd},\quad v_{dc} = V_{dc} + \tilde{v}_{dc},\quad i_{o} = I_{o} + \tilde{i}_{o} \qquad (10)$$

Here, the first term on the right of each expression in equation (10) is the steady-state equilibrium quantity and the second term is the perturbation. Setting the irrelevant perturbations $\tilde{v}_{sd} = 0$ and $\tilde{i}_{o} = 0$, substituting into equation (9), and neglecting the second-order term $C\,\tilde{v}_{dc}\,d\tilde{v}_{dc}/dt$, we obtain

$$\frac{1}{2} V_{sd} I_{cd} + \frac{1}{2} V_{sd}\,\tilde{i}_{cd} = C V_{dc}\frac{d\tilde{v}_{dc}}{dt} + V_{dc} I_{o} + \tilde{v}_{dc} I_{o} \qquad (11)$$

Since the two converters must maintain energy balance, they satisfy

$$\frac{1}{2} V_{sd} I_{cd} = V_{dc} I_{o} \qquad (12)$$

Then equation (11) is further simplified, in the Laplace domain, to

$$\frac{1}{2} V_{sd}\,\tilde{I}_{cd}(s) = \tilde{V}_{dc}(s)\left(C V_{dc}\, s + I_{o}\right) \qquad (13)$$

Let $V_{dc} = \lambda V_{sd}$, where $\lambda$ is the adjustable coefficient, and $I_{o} = V_{dc}/R_{o}$. Formula (13) can then be reduced to

$$\frac{\tilde{V}_{dc}(s)}{\tilde{I}_{cd}(s)} = \frac{R_{o}}{2\lambda\left(R_{o} C s + 1\right)} \qquad (14)$$

The intelligent monitoring robot clustering algorithm is currently one of the most popular and widely used cluster analysis methods. It is intuitive, easy to implement, and fast for image edge detection, but its biggest problem is that the number of clusters must be determined in advance. In addition, the selection of the initial cluster centers has a significant impact on the classification results.
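As a minimal illustration of such color-based clustering for coarse edge extraction, the sketch below uses OpenCV's built-in k-means; the cluster count K and the input file are hypothetical choices, and spatial information is deliberately ignored, which is exactly the weakness discussed next.

```python
import cv2
import numpy as np

K = 3  # number of clusters, which must be fixed in advance (hypothetical choice)
img = cv2.imread("frame.png")  # hypothetical input frame

# Cluster pixels by color only; no spatial coordinates are included.
pixels = img.reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 5, cv2.KMEANS_PP_CENTERS)

# Map every pixel to its cluster label and take label transitions as coarse edges.
label_map = labels.reshape(img.shape[:2]).astype(np.uint8)
edges = cv2.Canny(label_map * (255 // (K - 1)), 50, 100)
cv2.imwrite("cluster_edges.png", edges)
```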
On the other hand, image edge detection using the clustering algorithm is sensitive to noise, because only the color information of the pixels is used and their spatial information is ignored, which can lead to erroneous edge detection results.

3.2 Improved algorithms for multimedia image edge detection

When the calculation is carried out according to the correlation function, individual feature vectors of the target in the image subject to multimedia image edge detection can substantially affect the edge detection data. Errors are prone to occur if the individual stripes of the target have few features, making it harder to reconstruct the image and to enhance the specific stripes of the reconstruction. Hence, the features of the original image patterns and the accuracy of the results can be improved by superimposing high-frequency data from the raw images on the selected image. After denoising, $I_x$ becomes the image grayscale sequence in multimedia image edge detection, where $x = P \cdot N$. To obtain the feature distribution of the edge region of the target individual, the grayscale conversion equation is

$$S_{binary} = \left\{S_0, S_1, \ldots, S_{Q-1}\right\}, \qquad S_i = \sum_{j=iW}^{(i+1)W-1} I_{x,j} \qquad (15)$$

where Q represents the grayscale next to the target region and W represents the conversion step. The noise removal function is

$$x_i(k+1) = Q_i(k)\,x_i(k) + w_i(k), \qquad z_i(k) = H_i(k)\,x_i(k) + v_i(k), \qquad i = 1,2,\ldots,m \qquad (16)$$

where $w_i(k)$ and $v_i(k)$ represent the pixel noise in the edge region of the target individuals, and $Q_i(k)$ and $H_i(k)$ are distributed in a balanced manner (mean 0, variance $S_i(k)$). The fuzzy set $u = \{u_{ik}\}$ is built and the stripe features are decomposed to obtain the denoised output:

$$I_{GSM} = \sum_{i=1}^{N} C_i D_i\!\left(s_i; h_i\right) I_i, \qquad D_i\!\left(s_i; h_i\right) = h_i\, D_i(s_i) + g_i C_i\!\left[V_i(s_i; h_i) - V_i\right] \qquad (17)$$

Considering the differences between image regions in edge detection, information on the spatial ripple characteristics is described by $u^{(n)}(x,y,d)$, and pixel gray intervals are built along the gradient to obtain an iterative denoising equation:

$$u^{(n+1)}(x,y) = u^{(n)}(x,y) + \delta^{(n)}(x,y) \qquad (18)$$

$$\delta^{(n)}(x,y) = M\,\nabla_{x} u^{(n)}(x,y) + N\,\nabla_{t} u^{(n)}(x,y,d) \qquad (19)$$

Here $n = 1, 2, \ldots, T$ denotes the iteration step. The interference noise removal method reduces the influence of noise on the image and completes the improvement of the intelligent monitoring robot calculation method.

In the process of determining the directional coordinates $r(n_1, n_2)$, applying a global search will lead to errors in the edge detection. Therefore, the search scope should be narrowed by rebuilding the interconnection between levels, so as to reduce the occurrence of such errors. According to the horizontal two-fold limit, if the corrected longitudinal deviation tends to 0, the longitudinal search range can be reduced. In the CTF 3D edge detection scheme, under the assumption of connectivity at an arbitrary reconstruction level, the lateral search area can also be reduced [25]. The maximum value of the low-pass filtering process is then recalculated for the enlarged $r(n_1, n_2)$.
The maximum value range can be reduced as shown in equation (20):

$$h\!\left(k_1, k_2\right) = \begin{cases} 1, & \left|k_1\right| \le U_1,\ \left|k_2\right| \le U_2 \\ 0, & \text{otherwise} \end{cases} \qquad (20)$$

If the corresponding offset interval between the horizontal and vertical directions is very small, the intelligent monitoring robot algorithm cannot guarantee the accuracy of the offset data, especially when there are a large number of values, and the correction function of the CTF architecture cannot be used properly. If the restricted area is too large, it cannot reduce the reconstruction error. Therefore, the optimal search range is determined based on U1 and U2.

(2) Misidentification and peak relocation

In the initial calculation method, the wrong reconstruction can be evaluated and modified only once after the reconstruction, but in practice there is an intermediate reconstruction process, which propagates the reconstruction error to the next resolution. At the same time, the evaluation and correction criteria use a fixed critical value and a maximum value α. However, it is more accurate to determine the critical value dynamically, because the global maximum value at each resolution varies greatly [26]. The minimum feature value r(i,j) of the image acquired by the intelligent monitoring robot control algorithm is the image edge feature to be calculated, followed by calculating the difference between feature r and feature (i,j) in coordinates; the optimal visualized image can thus be obtained as required. After the calculation corresponding to the elements of the two feature arrays, it must be determined whether the result exceeds the current minimum; if so, the intelligent monitoring robot control algorithm is terminated during the calculation process. The mapping is generalized to obtain the corresponding generalized mapping equation:

$$\begin{pmatrix} x_n \\ y_n \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{n} \begin{pmatrix} x_0 \\ y_0 \end{pmatrix} \bmod N \qquad (21)$$

From equation (21) it can be seen that this mapping has a fixed point (0, 0); in other words, the point (0, 0) remains unchanged even after n iterations of the mapping. Subsequently, the values of the coordinate points are taken in {1, 2, …, N} so as to avoid the fixed point. In accordance with {1, 2, …, N}, the mapping equation is converted into a form that contains two independent parameters:

$$\begin{pmatrix} x_n \\ y_n \end{pmatrix} = \begin{pmatrix} 1 & q \\ p & pq+1 \end{pmatrix}^{n} \begin{pmatrix} x_0 \\ y_0 \end{pmatrix} \bmod N \qquad (22)$$

Equation (22) gives the transformed coordinates $(x_n, y_n)$ obtained after n repeated applications of the map, modulo N, starting from the initial point $(x_0, y_0)$.

4 Practical cases and data analysis

The DCNN networks run on an Intel Core i7 PC with 40 GB RAM. The proposed model was trained on an NVIDIA GTX 1070 GPU and implemented in Python. Image edge detection performance can be assessed mainly based on the precise location of the image edge, the continuity of the image edge, and the edge width. In this paper, an assessment measure of image edge performance is adopted, defined as

$$P = \frac{1}{\max\left(I_D, I_L\right)}\sum_{i=1}^{I_D}\frac{1}{1 + \alpha d_i^2} \qquad (23)$$

where $I_D$ and $I_L$ are the numbers of detected and ideal edge points, $d_i$ is the distance from the ith detected edge point to the ideal edge, and $\alpha$ is a scaling constant. A CT image of the human head is selected as the target for simulation evaluation, and the effectiveness of the image edge detection method put forward in this paper is tested accordingly.
The details are shown in Figure 8 below. At the same time, the Canny algorithm and the Sobel algorithm are used for comparison and analysis.

Figure 8: Selected CT image of the human brain

The test results obtained with the different methods are shown in Figure 9. From the analysis of the test results in Figure 9, it can be observed that the method adopted in this paper is closer to the ideal than the compared edge detection methods; the image edge continuity it obtains is high, and its detection accuracy is also the highest. The test based on the Canny algorithm captures excessive image detail, some edge details are not detected, and the image is distorted. The incorrect detections and omissions in the image edge test based on the Sobel algorithm are relatively severe, so it fails to achieve the expected results.

Figure 9: Effect of image edge detection based on different methods

Starting from the image in Figure 8, interference from external noise of different degrees is added, the different methods are then applied to extract the image edges, and the analysis results for the evaluation index P are obtained; the details are shown in Figure 10. According to the test results illustrated in Figure 10, relatively good curve fitting can be achieved by the proposed method, which improves the detection accuracy of image edges effectively. The comparative analysis suggests that the proposed method is superior in terms of noise resistance and detection accuracy in the process of image edge detection.

Figure 10: Performance index of image edge detection based on different methods

The method in this paper is applied to the multimedia image design department of an enterprise, where users evaluate the works of the department's multimedia image designers in order to grasp the shortcomings of the works and implement improvements. The adoption rates of different types of multimedia image works tested after using this method in the department are shown in Figure 11. From Figure 11, it can be seen that after applying the method of this paper, the lowest adoption rate of multimedia image designs in the department is 0.98, and many multimedia image works in the department are adopted. This indicates that the method performs well and can serve as one of the application methods for multimedia image companies to improve the quality of multimedia images.

Figure 11: Test results of the intelligent monitoring robot algorithm

4.1 Peak signal-to-noise ratio (PSNR)

PSNR is a statistic frequently used to assess how well compressed or reconstructed images and videos perform. It is the ratio of a signal's maximum possible power to the power of the corrupting noise that degrades the fidelity of its representation. Table 2 and Figure 12 depict the PSNR results.

Table 2: Comparison of PSNR

Method | PSNR
Edge-XOR [31] | 53.532
Improved Hash [31] | 47.559
Fractional Fourier Transform [31] | 25.786
Median Filter [31] | 18.85
Ant Colony Optimization [31] | 8.975
Proposed | 62.872

Figure 12: Analysis of PSNR

4.2 Mean squared error (MSE)

The mean squared error (MSE) metric, used in analytics and machine learning, measures the average of the squared differences between the actual and predicted values.
It is computed as the average of the squared discrepancies between the predicted and actual results and is frequently used in regression analyses. Table 3 and Figure 13 depict the MSE results.

Table 3: Comparison of MSE

Method | MSE
Edge-XOR [31] | 0.288
Improved Hash [31] | 0.409
Fractional Fourier Transform [31] | 17.58
Median Filter [31] | 24.66
Ant Colony Optimization [31] | 8.233
Proposed | 0.125

Figure 13: Analysis of MSE

4.3 Discussion

Bluetooth Low Energy (BLE) [27] has various limitations, such as a slower data transmission rate than traditional Bluetooth, even though it is acknowledged for its energy economy and low power consumption. One of the key challenges in the field of Human Machine Interface (HMI) [28] is the possibility of information overload, which occurs when users are inundated with too much data and complex interfaces, resulting in mistakes and confusion. Moreover, ROS [30] lacks real-time capabilities, which makes it less appropriate for applications requiring low-latency control and accurate timing. One important drawback of the Level of Robot Autonomy (LORA) framework [29] is its tendency to oversimplify the complexity of advanced autonomous systems. Our proposed approach adeptly navigates and effectively mitigates these challenges, ensuring a comprehensive and satisfactory resolution of the aforementioned difficulties.

5 Conclusion

In the process of reconstructing, transmitting, sending, and receiving digital multimedia images, different types of edge feature interference emerge because of the limited quality of multimedia image restoration devices and the limitations of multimedia image edge detection methods. In VR, the audience can adjust the line of sight and query information about the environment, which is impossible with traditional images. This paper adopts matching registration to lock the peak range of the image, calculates the threshold and optimizes the model, evaluates and corrects erroneous data, and computes the maximum, minimum, and depth. After the improvement, the noise during transitions is reduced and the calculation is performed at a higher speed; the approach gives a better feeling for multimedia images while maintaining both robustness and feasibility. Based on the weights of the intelligent monitoring robot, the relationship between each pixel of the multimedia image and the size of the pixel to be edge detected can be calculated, and the correlation between multimedia image pixels and local data can be used appropriately. To a certain extent, this effectively addresses the processing of multimedia image edge features and edge details. Finally, the experimental research shows that the intelligent monitoring robot used in this paper achieves a higher PSNR value than traditional multimedia image edge detection algorithms, and the iterative adaptive method can effectively process the edge details of the multimedia image, yielding a better multimedia image processing effect.

Limitations and future study

Robustness, in domains such as machine learning, programming, or safety, refers to the capacity of a system to retain its functionality and efficacy under varied conditions and in the face of numerous obstacles. The resolution of the monitoring robot's cameras can constrain the quality of multimedia images, with higher-resolution cameras typically being more costly and resource-intensive.
The swift analysis and interpretation of images by monitoring robots may be impeded by constraints in processing capability, as real-time processing of multimedia data demands significant computational power. Enhancing the robot's perception capabilities, especially in demanding environmental conditions, is achievable by incorporating a variety of sensors such as infrared, lidar, radar, and cameras. It is crucial to establish ethical guidelines and legal frameworks for deploying monitoring robots, with a focus on prioritizing transparency, accountability, and the protection of privacy rights in upcoming research endeavours.

Data availability

The data used to support the findings of this study are included within the article.

Conflicts of interest

The authors declare no conflicts of interest.

Funding statement

This research study is sponsored by the following projects. Project one: the Humanities and Social Sciences Research Projects of the Ministry of Education, "Research on university learning behavior model based on the fusion of spatio-temporal trajectory data and multi-source data", project number 19C10481026. Project two: Key Research and Development and Promotion Projects in Henan Province, "Research and development of medical endoscope system based on narrow band imaging and intelligent image fusion", project number 192102310457. The authors thank these projects for supporting this article.

References

[1] Yue, J. J., & Chen, G. (2020). Competence of pharmacy mentors: a survey of the perceptions of pharmacy postgraduates and their mentors. BMC Medical Education, 44(2), 113-117.
[2] Li, T. Y. (2021). The construct of English competence and test design for non-English major postgraduates. English Language Teaching, 23(5), 531-55.
[3] Moskal, M., & Schweisfurth, M. (2018). Learning, using and exchanging global competence in the context of international postgraduate mobility. Globalisation, 34(7), 1104-1109.
[4] Dong, Q. (2020). Analysis on the ways to improve thesis writing ability of postgraduates in design direction of
[5] Xiao, Y., Wu, X. H., Huang, Y. H., & Zhu, S. Y. (2021). Cultivation of compound ability of postgraduates with medical professional degree: the importance of double tutor system. Postgraduate Medical Journal, postgradmedj-2021-139779.
[6] Li, T. Y. (2021). The construct of English competence and test design for non-English major postgraduates. English Language Teaching, 57, 237-248.
[7] Liang, L., Li-Ye, H. E., & Chen, B. J. (2020). Discussion on the cultivation of scientific research ability of postgraduates majoring in medicine. DEStech Transactions on Social Science Education and Human Science (AEMS), 38(1), 55-61.
[8] Feng, L. (2016). Study on practical innovation ability cultivation of management postgraduates based on QFD. DEStech Transactions on Social Science Education and Human Science, 39(6), 1006-1011.
[9] Taghizadeh, N. (2020). Research on the practical ability of postgraduates majoring in Marxist theory: based on a survey of universities in Chongqing. Advances in Social Sciences, 09(5), 641-650.
[10] Rezaie, A. A., & Habiboghli, A. (2017). Detection of lung nodules on medical images by the use of fractal segmentation. International Journal of Interactive Multimedia & Artificial Intelligence, 4(5), 15-19.
[11] Sun, H. L., Peng, X., & University, Y. (2018). On practical ability improvement-oriented cultivating mode of professional degree postgraduates.
Heilongjiang Researches on Higher Education, 2021(4).
[12] Han, X., Singh, B., Morariu, V., & Davis, L. S. (2015). VRFP: on-the-fly video retrieval using web images and fast Fisher vector products. IEEE Transactions on Multimedia, 1-1.
[13] Su, G. M., Zhang, C., Wang, H., Chen, M., & Lienhart, R. (2019). Pushing the boundary of multimedia
[14] Li, X. (2017). Tag relevance fusion for social image retrieval. Multimedia Systems, 23(1), 29-40.
[15] Kim, W., Lee, S., & Bovik, A. C. (2021). VR sickness versus VR presence: a statistical prediction model. IEEE Transactions on Image Processing, 30, 559-571.
[16] Zhao, S., Shmaliy, Y. S., Ahn, C. K., & Liu, F. (2018). Adaptive-horizon iterative UFIR filtering algorithm with applications. IEEE Transactions on Industrial Electronics, 65(8), 6393-6402.
[17] Zheng, J., Yang, P., Chen, S., Shen, G., & Wang, W. (2017). Iterative re-constrained group sparse face recognition with adaptive weights learning. IEEE Transactions on Image Processing, 26(5), 2408-2423.
[18] Zeng, J., Sui, X., & Gao, H. (2015). Adaptive image-registration-based nonuniformity correction algorithm with ghost artifacts eliminating for infrared focal plane arrays. IEEE Photonics Journal, 7(5), 1-16.
[19] Ahmed, F., & Das, S. (2014). Removal of high-density salt-and-pepper noise in images with an iterative adaptive fuzzy filter using alpha-trimmed mean. IEEE Transactions on Fuzzy Systems, 22(5).
[20] Chen, G., Li, D., & Zhang, J. (2014). Iterative gradient projection algorithm for two-dimensional compressive sensing sparse image reconstruction. Signal Processing, 104, 15-26.
[21] Hao, Z., Xianqi, L., Yunmei, C., Jewook, P., An-Ping, L., & Zhang, X.-G. (2017). Postprocessing algorithm for driving conventional scanning tunneling microscope at fast scan rates. Scanning, 2017, 1-8.
[22] Chen, Z., Fu, Y., Xiang, Y., & Rong, R. (2017). A novel iterative shrinkage algorithm for CS-MRI via adaptive regularization. IEEE Signal Processing Letters, 24(10), 1443-1447.
[23] Jiang, G., Luo, M., & Bai, K. (2019). Optical positioning technology of an assisted puncture robot based on binocular vision. International Journal of Imaging Systems & Technology, 29(2), 180-190.
[24] Wang, T., & Guo, J. (2018). Design and implementation of robot precise grasp based on image processing. Manufacturing Technology & Machine Tool, 24, 1012-1017.
[25] Sheu, J. S., & Tsai, W. H. (2017). Implementation of a following wheel robot featuring stereoscopic vision. Multimedia Tools and Applications, 76(23), 25161-25177.
[26] Horssen, E. V., Hooijdonk, J. V., Antunes, D., & Heemels, W. (2019). Event- and deadline-driven control of a self-localizing robot with vision-induced delays. IEEE Transactions on Industrial Electronics, 9(5), 54-63.
[27] Narayanan, K. L., Krishnan, R. S., Son, L. H., Tung, N. T., Julie, E. G., Robinson, Y. H., Kumar, R., & Gerogiannis, V. C. (2022). Fuzzy guided autonomous nursing robot through wireless beacon network. Multimedia Tools and Applications, 1-29.
[28] Dong, W., Yang, L., & Fortino, G. (2020). Stretchable human machine interface based on smart glove embedded with PDMS-CB strain sensors. IEEE Sensors Journal, 20(14), 8073-8081.
[29] von Haxthausen, F., Böttger, S., Wulff, D., Hagenah, J., García-Vázquez, V., & Ipsen, S. (2021). Medical robotics for ultrasound imaging: current systems and future trends. Current Robotics Reports, 2, 55-71.
[30] Cañas, J. M., Perdices, E., García-Pérez, L., & Fernández-Conde, J. (2020). A ROS-based open tool for intelligent robotics education. Applied Sciences, 10(21), 7419.
[31] Park, K., Chae, M., & Cho, J. H. (2021). Image pre-processing method of machine learning for edge detection with image signal processor enhancement. Micromachines, 12(1), 73.