Research article

Estimating emissions from open-burning of uncollected municipal solid waste in Nigeria

  • Open-burning of municipal solid waste (MSW) is very common in Nigeria. Hence, this work estimated the emissions (greenhouse gases and others) from open-burning of uncollected MSW in Nigeria. The parameters (secondary data) used for the estimations were obtained from pertinent literature on the MSW generation rate in Nigeria, the level of uncollected MSW subjected to burning in Nigeria, oxidation/burning efficiency and others. About 80.6% of the waste generated in Nigeria is combustible. The National Bureau of Statistics showed that 52% of Nigerians lived in urban areas in the year 2020. With an annual mean growth rate of 2.62% between 2006–2020 (World Bank data), the urban population of Nigeria was estimated at 104,885,855 in 2020. The estimation for the year 2020 shows that the MSW generated by the urban population of Nigeria ranged from 16.8 to 25.3 million tons. With a burning/oxidation efficiency (η) of 0.58, between 2.4 and 3.7 million tons of the uncollected waste is open-burned. This represents 14.7% of the total MSW generated in Nigeria for the year. IPCC guidelines show that only fossil-carbon wastes are climate-relevant for CO2 emissions. Our estimation shows that 14.3% of the MSW generated in Nigeria contains fossil carbon. The total emissions for the three GHGs (carbon dioxide, methane and nitrous oxide) were between 798 and 1,197 kilotons of CO2-eq per year. Other emissions associated with open-burning of MSW were also estimated using their default emission factors. The findings suggest the urgent need for the country to transition to a proper waste management system, including improved collection and disposal to sanitary landfills, to protect public health and the environment.

    Citation: Chukwuebuka C. Okafor, Juliet C. Ibekwe, Chinelo A. Nzekwe, Charles C. Ajaero, Chiadika M. Ikeotuonye. Estimating emissions from open-burning of uncollected municipal solid waste in Nigeria[J]. AIMS Environmental Science, 2022, 9(2): 140-160. doi: 10.3934/environsci.2022011




    In the past two decades, digital watermarking technology has played an increasingly important role in the field of information security. By embedding specific watermarks into digital works such as images [1,2], audio [3,4] or video [5], it can achieve the purposes of copyright tracking, integrity protection, content authentication, medical security and so on.

    With the wide application of audio on the Internet, people are paying more and more attention to the copyright protection of audio, which has attracted many scholars to audio watermarking technology. Salah et al. [6] presented an audio watermarking algorithm using a discrete Fourier transform, which has high transparency but poor robustness. Bhat et al. [7] proposed an audio watermarking algorithm based on a discrete cosine transform (DCT). The algorithm used singular value decomposition to achieve blind watermark extraction, and it had strong robustness to some signal processing operations, but its payload capacity was low. Hu and Hsu [8] proposed an efficient audio watermarking algorithm in the discrete wavelet transform domain by applying spectrum shaping technology to vector modulation. The authors claimed the payload capacity reached 818.26 bits per second (bps). Hwang et al. [9] designed an audio watermarking algorithm with singular value decomposition and quantization index modulation in order to achieve blind extraction. This algorithm applied singular value decomposition to the stereo signal to achieve strong robustness against amplitude scaling, MP3 compression and resampling, but its transparency was low. Merrad [10] developed a robust audio watermarking algorithm based on the strong correlation between two consecutive samples in a hybrid domain consisting of a discrete wavelet transform and the DCT. With the increasingly widespread application of audio watermarking algorithms, higher and higher requirements have been put forward for algorithm performance. How to resist malicious attacks on audio has always been a challenging issue in the research of audio watermarking algorithms. Yamni et al. [11] proposed a blind and robust audio watermarking algorithm by combining the discrete Tchebichef moment transform, the chaotic system of the mixed linear–nonlinear coupled map lattices and a discrete wavelet transform.
This algorithm achieved good results in terms of robustness and payload capacity, but no experimental results against synchronization attacks were reported. A robust and blind audio watermarking scheme based on the dual-tree complex wavelet transform and the fractional Charlier moment transform was proposed in paper [12]. It also obtained high imperceptibility and robustness against most common audio processing operations. Synchronization attacks may seriously destroy the structure of the audio data produced in the embedding process, which makes the extracting algorithm unable to accurately locate the watermark in the carried audio [13,14]. Therefore, how to resist synchronization attacks is the bottleneck in improving the robustness of algorithms [15]. A robust audio watermarking algorithm for overcoming synchronization attacks was proposed in paper [16]. This algorithm took the audio frame sequence number as a global feature to carry the watermark, and it could resist partial synchronization attacks. Hu et al. [17] explored the distributive feature of the approximate coefficients to develop an audio watermarking algorithm with a self-synchronization mechanism in the discrete wavelet transform domain. This algorithm reconstructed and reshaped the wavelet coefficients to track the locations of the watermark. It had strong robustness to attacks, but its transparency was low. An audio watermarking algorithm for resisting de-synchronization and recapturing attacks was developed in a previous paper [18]. In this algorithm, a logarithmic mean feature was constructed to design the embedding and extracting algorithms according to the residuals of the two sets of features. He et al. [19] proposed a novel audio watermarking scheme that embeds watermarks into a frequency-domain power spectrum feature to resist recapturing attacks.
From the analysis of the literature above, it can be seen that embedding a watermark on some stable features can effectively improve the robustness of an algorithm. The main reason is that such features do not change much when the audio is attacked, so the embedded watermark is not easily lost.

    The performance of an audio watermarking algorithm is not only related to the embedding and extracting rules, but it is also related to the setting of algorithm parameters, so how to choose the parameters in the application is particularly important. When different applications put forward new requirements for payload capacity, transparency and robustness, the watermarking algorithm usually cannot accurately adjust its parameters to meet these performance requirements. Nowadays, parameters of most audio watermarking algorithms are chosen by the users according to their experience in application, or are adjusted by the designers according to the performance achieved by the algorithm in experiments. These methods lack an effective parameter adjustment mechanism and cannot effectively stimulate the performance of the algorithm. Robustness, transparency, and payload capacity are three important indicators of audio watermarking algorithms, and these indicators are determined by multiple algorithm parameters. Therefore, how to set these parameters so that all three indicators can meet performance requirements is a multi-parameter and multi-objective combinatorial optimization problem.

    In order to solve the above problems, some scholars have used meta-heuristic algorithms to optimize the parameters of watermarking algorithms. Meta-heuristic algorithms are self-organized and decentralized algorithms used for solving complex problems using team intelligence [20]. Wu et al. [21] proposed an audio watermarking algorithm based on a genetic algorithm for parameter optimization. This algorithm had high transparency and a large payload capacity, but it was not robust against attacks due to the lack of a synchronization mechanism. Kaur et al. [22] also proposed an audio watermarking method with a genetic algorithm which was used to find the optimal number of audio samples needed to conceal the watermark. Some scholars have attempted to apply sine and cosine algorithms to the design of image watermarking algorithms [23,24]. With the deepening of the research on watermarking technology, more and more watermarking algorithms based on meta-heuristic algorithms were explored. They all play a positive role in improving the performance of watermarking algorithms, but there are still many problems to be solved in the practical application.

    Based on the above analysis, weak robustness and the multi-parameter optimization problem are still urgent issues in the current research and application of audio watermarking algorithms. In our research, an adaptive and blind audio watermarking algorithm based on dither modulation and a butterfly optimization algorithm (BOA) is proposed. The main contributions are as follows.

    1) We propose a robust and blind audio watermarking algorithm based on convolution and dither modulation. A stable feature is designed using convolution operations, and dither modulation is performed on this feature to design embedding and extracting algorithms. The stability of this feature improves the robustness of the algorithm to prevent watermark loss. The algorithm has the capability of blind extraction, and the watermark can be extracted only by comparing the feature value and quantized value, which will be very convenient for the algorithm to be applied in practice.

    2) We propose a method for setting the parameters to solve the multi-parameter and multi-objective problem of audio watermarking algorithms, which can adaptively adjust the algorithm parameters with the performance requirements. The BOA is used to optimize the key parameters of the algorithm which can be adaptively matched for the performance requirements by coding the population and constructing the fitness function. In the case of meeting the performance requirements of transparency and payload capacity, the fitness function of the BOA is constructed by the total bit error ratio (BER), which is a comprehensive evaluation of the watermark extracted from the carried audio after it has been subjected to multiple malicious attacks. Through global search and local search, the population is continuously optimized to search for the global optimal butterflies, so as to improve the robustness under specific performance requirements.

    In this section, the embedding and extracting principle of the proposed algorithm will be described in detail. A feature which is closely linked to the change of the intermediate frequency coefficient is designed by convolving the low frequency coefficient and the intermediate frequency coefficient. When embedding the watermark, the feature will be quantized by dither modulation, and the direction of dither modulation is controlled by the value of a binary watermark. When extracting the watermark, the feature will be calculated and uniformly quantized, and the binary watermark will be obtained by comparing the feature value and the quantized value.

    Based on the energy concentration characteristics of the DCT and the bidirectional quantization characteristics of dither modulation [25,26], a feature is explored to carry the watermark in the DCT domain, and then the binary watermark can be embedded into the audio by modifying the feature with dither modulation.

    The original audio with N sample-points can be denoted as x(n) (1 ≤ n ≤ N). The binary watermark W that will be embedded into the audio can be expressed as formula (1).

    $W = \{\, w_{in}(l,m),\ 1 \le l \le L_1,\ 1 \le m \le L_2 \,\}$ (1)

    where win(l,m) ∈ {0, 1}. Divide x(n) into L1 audio fragments, and use the synchronization mechanism proposed in a previous paper [27] to select the voiced frame with the highest energy, xl(n0) (1 ≤ n0 ≤ N1), with N1 sample-points from each audio fragment to carry the watermark. xl(n0) will be processed by the DCT using formulas (2) and (3).

    $X_l(0) = \sqrt{\dfrac{1}{N_1}} \sum_{n_0=0}^{N_1-1} x_l(n_0), \quad k = 0$ (2)
    $X_l(k) = \sqrt{\dfrac{2}{N_1}} \sum_{n_0=0}^{N_1-1} x_l(n_0) \cos\dfrac{(2n_0+1)k\pi}{2N_1}, \quad k \ne 0$ (3)

    where Xl(0) is the component with a frequency of 0 Hz, and Xl(k) is the harmonic component at fk Hz. fk is the frequency of each harmonic component, calculated using formula (4), and fs is the sampling rate.

    $f_k = \dfrac{k f_s}{2 N_1} \quad (k \ne 0)$ (4)
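As a concrete illustration of formulas (2)–(4): the transform used here matches the orthonormal type-II DCT found in common numerical libraries. The sketch below (Python with NumPy/SciPy standing in for the paper's Matlab code; the function name is ours) computes Xl(k) for one fragment together with the frequency fk of each spectral line:

```python
import numpy as np
from scipy.fft import dct

def fragment_spectrum(x_l, fs):
    """DCT coefficients of one audio fragment (formulas (2)-(3)) and the
    frequency of each harmonic component (formula (4))."""
    N1 = len(x_l)
    # norm='ortho' gives X(0) = sqrt(1/N1)*sum and X(k) = sqrt(2/N1)*sum(...cos)
    X_l = dct(np.asarray(x_l, dtype=float), type=2, norm='ortho')
    f_k = np.arange(N1) * fs / (2 * N1)  # f_k = k*fs/(2*N1)
    return X_l, f_k
```

For example, with fs = 44,100 Hz and N1 = 4096 (the values used later in the experiments), adjacent spectral lines are about 5.38 Hz apart.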

    Assume that Xl0(k) and Xlm(k) respectively represent the low frequency-band and the intermediate frequency-band, each containing N2 spectral lines from Xl(k). r0 and r1 are the positions of the first spectral lines of Xl0(k) and Xlm(k) in Xl(k). The watermark is embedded into the audio fragments by modifying Xlm(k), and the carried frequency-band X'lm(k), which carries the L2-bit watermark, can be represented by formula (5), where ρm is a constant indicating the change proportion of the intermediate frequency coefficients Xlm(k).

    $X'_{lm}(k) = \rho_m X_{lm}(k)$ (5)

    The feature CFlm shown in formula (6) can be used to represent the change of the intermediate frequency-band relative to the low frequency-band.

    $CF_{lm} = \dfrac{\sum \big( X_{l0}(k) \ast X_{lm}(k) \big) / (2N_2 - 1)}{\sum \left| X_{l0}(k) \right|^2 / N_2}$ (6)

    where ∗ represents the convolution operation on Xl0(k) and Xlm(k). The numerator of this formula is the average value of the convolution result, and the denominator is the average value of the squared magnitude of Xl0(k). Quantize CFlm at an equal interval δ, and the quantized value CFQlm can be expressed as formula (7).

    $CFQ_{lm} = \mathrm{round}\!\left( \dfrac{CF_{lm}}{\delta} \right)$ (7)
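To make formulas (6) and (7) concrete, the following Python sketch (function names are ours; NumPy's full linear convolution is assumed for the ∗ operation, which is why the convolution result has 2N2 − 1 points) computes the feature CFlm from the two frequency bands and quantizes it:

```python
import numpy as np

def feature_cf(X_l0, X_lm):
    """Feature CF_lm of formula (6): mean of the convolution of the two
    bands divided by the mean squared magnitude of the low band."""
    N2 = len(X_l0)
    num = np.convolve(X_l0, X_lm).sum() / (2 * N2 - 1)  # full convolution: 2*N2-1 points
    den = (np.abs(X_l0) ** 2).sum() / N2
    return num / den

def quantize_cf(cf, delta):
    """Uniform quantization of formula (7) at step delta."""
    return round(cf / delta)
```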

    round(·) rounds its argument to the nearest integer. Modulate win(l,m) into a bipolar bitstream w(l,m) according to formula (8).

    $w(l,m) = \begin{cases} 1, & w_{in}(l,m) = 1 \\ -1, & w_{in}(l,m) = 0 \end{cases}$ (8)

    The embedding rule for embedding the L2-bit watermark into xl(n0) can be expressed as formula (9).

    $CF'_{lm} = \delta \, CFQ_{lm} + \dfrac{\delta \, w(l,m)}{4}$ (9)

    According to formulas (5) and (6), the carried feature CF'lm can also be expressed as formula (10).

    $CF'_{lm} = \dfrac{\sum \big( X_{l0}(k) \ast X'_{lm}(k) \big) / (2N_2 - 1)}{\sum \left| X_{l0}(k) \right|^2 / N_2} = \rho_m CF_{lm}$ (10)

    It can be seen that CFlm changes in the same proportion as Xlm(k), so Xlm(k) can be changed by modifying CFlm in order to embed the L2-bit watermark into the audio fragment xl(n0). The change proportion ρm can be expressed as formula (11).

    $\rho_m = \dfrac{CF'_{lm}}{CF_{lm}} = \dfrac{X'_{lm}(k)}{X_{lm}(k)} = \left( \delta \, CFQ_{lm} + \dfrac{\delta \, w(l,m)}{4} \right) \cdot \dfrac{\sum \left| X_{l0}(k) \right|^2 / N_2}{\sum \big( X_{l0}(k) \ast X_{lm}(k) \big) / (2N_2 - 1)}$ (11)

    Therefore, watermarks can be concealed in an audio fragment by modifying the intermediate frequency-band coefficients Xlm(k), and the change proportion ρm can be calculated according to formula (11).
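Putting formulas (5)–(11) together, one bit is embedded by moving CFlm onto the dithered lattice point of formula (9) and rescaling the intermediate band accordingly. A minimal sketch follows (Python; the function name is ours, and the feature of formula (6) is recomputed inline so the block is self-contained):

```python
import numpy as np

def embed_bit(X_l0, X_lm, w_in, delta):
    """Embed one watermark bit w_in in {0,1} into the intermediate band
    X_lm by dither modulation of the feature CF_lm (formulas (5)-(11))."""
    N2 = len(X_l0)
    cf = (np.convolve(X_l0, X_lm).sum() / (2 * N2 - 1)) / \
         ((np.abs(X_l0) ** 2).sum() / N2)       # feature, formula (6)
    w = 1 if w_in == 1 else -1                   # bipolar bit, formula (8)
    cfq = round(cf / delta)                      # formula (7)
    cf_marked = delta * cfq + delta * w / 4      # embedding rule, formula (9)
    rho = cf_marked / cf                         # formula (11), since CF' = rho * CF
    return rho * X_lm                            # carried band, formula (5)
```

Because CFlm is linear in Xlm(k) (the denominator depends only on Xl0(k)), scaling the band by ρm moves the feature exactly onto δ·CFQlm ± δ/4.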

    Figure 1 shows the flow diagram of the embedding algorithm, and the embedding steps can be described as follows in detail.

    Figure 1.  Flow diagram of the embedding algorithm.

    Step 1: Convert the watermark into a binary-string win(l,m) and modulate it to obtain a bipolar bit-stream w(l,m).

    Step 2: Divide x(n) into L1 fragments to obtain xl(n0).

    Step 3: Apply a DCT to xl(n0) to obtain the DCT coefficients Xl(k).

    Step 4: Select Xl0(k) and Xlm(k) from Xl(k).

    Step 5: Calculate CFlm according to formula (6).

    Step 6: Quantize CFlm to get CFQlm according to formula (7).

    Step 7: Embed the L2-bit watermark into xl(n0), and get the carried feature CF'lm according to formula (9).

    Step 8: Calculate ρm according to formula (11).

    Step 9: Calculate the carried frequency-band X'lm(k) according to formula (5), and substitute X'lm(k) into Xl(k) to obtain the carried spectrum X'l(k).

    Step 10: Obtain the carried audio fragment x'l(n0) by applying an inverse DCT to X'l(k).

    Step 11: Repeat step 3 to step 10 until all bits of the watermark are concealed into the audio.

    Step 12: Reconstruct all x'l(n0) to obtain the carried audio x'(n).

    According to the embedding principle described in Section 2.1, the binary watermark can be concealed into the audio by applying dither modulation to the feature. In the extracting process, the feature will also be quantized at the same interval as the embedding process, and then the binary watermark can be extracted without the original audio by comparing the feature value with the quantized value.

    Divide the carried audio x'(n) into L1 audio fragments x'l(n0), and apply the DCT to obtain X'l(k). Calculate CFlm with formula (6), and quantize CFlm at δ to obtain CFQlm with formula (7). The quantized value, denoted $\widehat{CF}_{lm}$, can be calculated with formula (12).

    $\widehat{CF}_{lm} = \delta \, CFQ_{lm}$ (12)

    The extracting rule for obtaining the L2-bit watermark wout(l,m) from x'l(n0) can be expressed as formula (13).

    $w_{out}(l,m) = \begin{cases} 1, & CF_{lm} \ge \widehat{CF}_{lm} \\ 0, & CF_{lm} < \widehat{CF}_{lm} \end{cases}$ (13)
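The blind extraction of formulas (12) and (13) needs only the received frequency bands and the quantization step δ, not the original audio. A sketch under the same assumptions as the embedding code (Python, function name ours):

```python
import numpy as np

def extract_bit(X_l0, X_lm_received, delta):
    """Recover one bit by re-quantizing the feature (formulas (6), (7),
    (12)) and comparing it with the quantized value (formula (13))."""
    N2 = len(X_l0)
    cf = (np.convolve(X_l0, X_lm_received).sum() / (2 * N2 - 1)) / \
         ((np.abs(X_l0) ** 2).sum() / N2)   # feature, formula (6)
    cf_hat = delta * round(cf / delta)       # quantized value, formulas (7), (12)
    return 1 if cf >= cf_hat else 0          # extracting rule, formula (13)
```

After embedding, the feature sits δ/4 above (bit 1) or below (bit 0) a lattice point, so any distortion smaller than δ/4 on the feature still yields the correct bit.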

    Figure 2 shows the flow diagram of the extracting algorithm, and the extracting steps can be described as follows in detail.

    Figure 2.  Flow diagram of the extracting algorithm.

    Step 1: Divide the carried audio x'(n) into L1 audio fragments to obtain x'l(n0).

    Step 2: Apply a DCT to xl(n0) to obtain the DCT coefficients Xl(k).

    Step 3: Select Xl0(k) and Xlm(k) from Xl(k).

    Step 4: Calculate CFlm with formula (6).

    Step 5: Quantize CFlm to get CFQlm with formula (7).

    Step 6: Calculate the quantized value $\widehat{CF}_{lm}$ with formula (12).

    Step 7: Extract the L2-bit watermark from x'l(n0) with formula (13).

    Step 8: Repeat step 2 to step 7 until all bits of the watermark are extracted.

    In order to achieve the best performance in different applications, the parameters of the algorithm must be set adaptively to meet different performance requirements. The butterfly optimization algorithm (BOA) is a nature-inspired optimization algorithm developed in 2019. It can be used to solve global optimization problems by imitating the food-searching and mating behavior of butterflies, and it has the advantages of fast convergence and strong searching ability [28]. There are four key parameters (r0, r1, N2, δ) in the proposed algorithm, which have a significant impact on its overall performance.

    It is assumed that the initial population POP has M butterflies, and the position of each butterfly consists of four key parameters, as shown in the formula (14).

    $POP = \begin{bmatrix} B_1 \\ \vdots \\ B_i \\ \vdots \\ B_M \end{bmatrix} = \begin{bmatrix} r_0^1 & r_1^1 & N_2^1 & \delta^1 \\ \vdots & \vdots & \vdots & \vdots \\ r_0^i & r_1^i & N_2^i & \delta^i \\ \vdots & \vdots & \vdots & \vdots \\ r_0^M & r_1^M & N_2^M & \delta^M \end{bmatrix}$ (14)

    where Bi = (r0i, r1i, N2i, δi) (1 ≤ i ≤ M) represents the ith butterfly, and r0i, r1i, N2i and δi take random values in their respective ranges [Min(r0), Max(r0)], [Min(r1), Max(r1)], [Min(N2), Max(N2)] and [Min(δ), Max(δ)]. Min(·) and Max(·) represent the minimum and maximum values of the variables in brackets, respectively. Each butterfly emits a certain intensity of fragrance fi, which can be expressed as formula (15).

    $f_i = c \, I_i^{\alpha}$ (15)

    where c is the perceptual form, α is the power index, and I is the stimulus factor. Normally, c and α are constants, and Ii is related to the fitness function of the butterfly. The fitness function Fiti comprehensively considers three indicators, including payload capacity, transparency and robustness under various attacks, as shown in formula (16).

    $Fit_i = \dfrac{1}{I_i} = \sum_{j=1}^{A} a_j \, BER_j, \quad 1 \le j \le A$ (16)

    The boundary conditions of the above formula are SNR > SNR0 and Cap > Cap0, where SNR is the signal-to-noise ratio, as expressed in formula (17), and Cap is the payload capacity of the algorithm. SNR0 and Cap0 respectively indicate the thresholds of transparency and payload capacity that must be met. A indicates the total number of attacks, and BERj is the BER of the extracted watermark after applying the jth attack on the carried audio, as expressed in formula (18). aj indicates the importance of the jth attack among all attack types, and $\sum_{j=1}^{A} a_j = 1$.

    $SNR = 10 \lg \left( \dfrac{\sum_{n=1}^{N} x^2(n)}{\sum_{n=1}^{N} \big( x(n) - x'(n) \big)^2} \right)$ (17)
    $BER = \dfrac{\sum_{l=1}^{L_1} \sum_{m=1}^{L_2} w_{in}(l,m) \oplus w_{out}(l,m)}{L_1 L_2} \times 100\%$ (18)
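Formulas (17) and (18) are straightforward to compute from the original audio x(n), the carried audio x'(n) and the two watermark matrices. A brief sketch (Python, function names ours; the ⊕ in formula (18) is the bitwise XOR, i.e., a count of differing bits):

```python
import numpy as np

def snr_db(x, x_marked):
    """Signal-to-noise ratio of formula (17), in dB."""
    noise = np.asarray(x) - np.asarray(x_marked)
    return 10 * np.log10((np.asarray(x) ** 2).sum() / (noise ** 2).sum())

def ber_percent(w_in, w_out):
    """Bit error ratio of formula (18): percentage of differing bits (XOR)."""
    w_in = np.asarray(w_in)
    w_out = np.asarray(w_out)
    return (w_in ^ w_out).sum() / w_in.size * 100
```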

    A butterfly can conduct a random local search near its own position, or it can move towards the butterfly with the highest fragrance value and conduct a global search. Assume that there is a switch probability p. When the position of butterfly Bti needs to be updated in the tth iteration, a random number r is generated. If r ≤ p, the butterfly performs a local search, and its new position Bt+1i is updated according to formula (19).

    $B_i^{t+1} = B_i^t + \left( r^2 \times B_{i_0}^t - B_{i_1}^t \right) \times f_i, \quad 1 \le i_0, i_1 \le M$ (19)

    where Bti0 and Bti1 represent the positions of the i0th butterfly and the i1th butterfly in the tth iteration. Otherwise, the butterfly performs a global search, and its new position Bt+1i is updated according to formula (20).

    $B_i^{t+1} = B_i^t + \left( r^2 \times g - B_i^t \right) \times f_i$ (20)

    where g represents the position of the best butterfly, i.e., the one with the highest fragrance value in the tth iteration. The optimization process can be described in detail as follows.
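One iteration of the position update in formulas (19) and (20) can be sketched as follows (Python; the population is the M × 4 matrix of formula (14), and the random-number handling is our own choice; a full implementation would also clip each updated parameter back to its [Min(·), Max(·)] range):

```python
import numpy as np

def boa_step(pop, fragrance, g_best, p, rng):
    """One BOA iteration: local search (formula (19)) when r <= p,
    otherwise global search towards the best butterfly (formula (20))."""
    M = pop.shape[0]
    new_pop = pop.copy()
    for i in range(M):
        r = rng.random()
        if r <= p:                                    # local search, formula (19)
            i0, i1 = rng.integers(0, M, size=2)
            new_pop[i] = pop[i] + (r ** 2 * pop[i0] - pop[i1]) * fragrance[i]
        else:                                         # global search, formula (20)
            new_pop[i] = pop[i] + (r ** 2 * g_best - pop[i]) * fragrance[i]
    return new_pop
```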

    Step 1: Initialize the population and parameters. Set the perceptual form c, the power index α, the switch probability p, the population size M, the maximum number of iterations MaxG, SNR0 and Cap0, and then produce an initial population POP0.

    Step 2: Put four parameters from each butterfly into the embedding algorithm in order to get the carried audio, and then calculate SNR with the formula (17).

    Step 3: Select all qualified butterflies with performance that meets the boundary conditions, and run the embedding algorithm to get the carried audio.

    Step 4: Perform attack. Apply malicious attacks to the carried audio respectively, and then carry out the extracting algorithm to calculate BERj with the formula (18).

    Step 5: Calculate Fiti with the formula (16) to obtain the best butterfly in the current population.

    Step 6: Calculate fi of each butterfly with the formula (15).

    Step 7: Generate r and compare it with p. If r ≤ p, update the position according to formula (19); else, update the position with formula (20).

    Step 8: Repeat Step 2 to Step 7 until the maximum number of iterations reaches MaxG or the same global best butterfly occurs in five consecutive iterations.

    This section will evaluate the performance of the proposed algorithm in terms of payload capacity, transparency, robustness and complexity. Transparency is measured using the SNR and the objective difference grade (ODG), which is the key output of the perceptual evaluation of audio quality. In addition, transparency can be evaluated by observing the changes in the waveform and spectrogram of the audio before and after embedding the watermark. Robustness is evaluated with the BER, the normalized correlation (NC), which can be expressed as formula (21), and the structural similarity (SSIM) proposed by the Laboratory for Image and Video Engineering of the University of Texas at Austin, which reflects the similarity between the extracted watermark and the original watermark. If the extracted watermark is very similar to the original watermark, NC and SSIM will both be very close to 1, which indicates that the robustness is strong. Complexity is measured by the elapsed time consumed by the embedding and extracting algorithms.

    $NC = \dfrac{\sum_{l=1}^{L_1} \sum_{m=1}^{L_2} w_{in}(l,m) \, w_{out}(l,m)}{\sqrt{\sum_{l=1}^{L_1} \sum_{m=1}^{L_2} w_{in}(l,m)^2} \, \sqrt{\sum_{l=1}^{L_1} \sum_{m=1}^{L_2} w_{out}(l,m)^2}}$ (21)
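The NC of formula (21) is simply the cosine similarity between the two watermark bit matrices; a brief sketch (Python, function name ours):

```python
import numpy as np

def normalized_correlation(w_in, w_out):
    """Normalized correlation of formula (21) between two binary watermarks."""
    w_in = np.asarray(w_in, dtype=float)
    w_out = np.asarray(w_out, dtype=float)
    num = (w_in * w_out).sum()
    den = np.sqrt((w_in ** 2).sum()) * np.sqrt((w_out ** 2).sum())
    return num / den
```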

    Here, we list the experimental parameters and conditions in our tests: 1) Algorithm parameters: M = 50, c = 0.1, α = 0.1, p = 0.8, MaxG = 500, N1 = 4096, aj = 0.1 (j = 1, 2, …, 10), Min(r0) = 1, Max(r0) = 100, Min(r1) = 100, Max(r1) = 1000, Min(N2) = 1, Max(N2) = 20, Min(δ) = 0, Max(δ) = 2; 2) Twenty 64-second audio signals from the TIMIT standard database, including popular and symphony music, were tested; they were in WAV format, sampled at 44,100 Hz and quantized at 16 bits; 3) There were two groups of experiments according to the different watermarks. The first watermark was a binary image shown as Figure 3(a) with a size of 43 × 64, Cap0 = 40 bps, and SNR0 = 27 dB; the second watermark is shown as Figure 3(b) with a size of 86 × 64, Cap0 = 80 bps and SNR0 = 26 dB; 4) Computer system: 64-bit Microsoft Windows 10; 5) Programming language: Matlab 2016R.

    Figure 3.  Two watermarks: (a) The first image with 43 × 64; (b) The second image with 86 × 64.

    Payload capacity refers to the number of watermark bits that can be contained in the audio per second. In our study, the payload capacity is related to the size of the watermark and the duration T of the audio, so it can be calculated by formula (22). The duration T of the audio was about 64 seconds, and the size of the first watermark was 43 × 64 bits, so the payload capacity in the first group was 43 bps. Similarly, the payload capacity in the second group was 86 bps.

    $Cap = \dfrac{L_1 L_2}{T}$ (22)
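Formula (22) reproduces the two reported capacities directly; a trivial check (Python, function name ours):

```python
def payload_bps(L1, L2, T):
    """Payload capacity of formula (22): watermark bits per second of audio."""
    return L1 * L2 / T

# A 43 x 64-bit watermark in 64 s of audio gives 43 bps; 86 x 64 gives 86 bps.
```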

    The average experimental results for the SNR (dB), ODG, BER (%), NC, SSIM and Cap (bps) are listed in Table 1. "Yes" in Table 1 indicates the watermarking algorithm with the BOA, and "No" indicates the watermarking algorithm without the BOA, whose key parameters (r0, r1, N2, δ) were set as (20, 600, 5, 0.4).

    Table 1.  Average results under no attack.

    Item   1st group       2nd group       Paper [9]  Paper [13]  Paper [17]  Paper [21]
           Yes     No      Yes     No
    SNR    27      24      26      23      25         31          19          26
    ODG    −0.75   −0.85   −1.02   −0.98   −0.81      −0.08       −3.24       −1.18
    BER    0.00    0.12    0.05    0.16    0.06       0.00        0.00        0.00
    NC     1       0.98    0.99    0.98    0.99       0           1           0
    SSIM   1       1       1       1       1          1           1           1
    Cap    43      43      86      86      43         43          86          86


    According to the standards of the International Federation of the Phonographic Industry (IFPI) for audio watermarking algorithms, the SNR should be more than 20 dB and the payload capacity should be greater than 20 bps. It can be seen from the two groups of data in Table 1 that the average SNR values with the BOA were 27 dB and 26 dB, while the average SNR values without the BOA were 24 dB and 23 dB. This indicates that the proposed algorithm meets the IFPI standards in terms of transparency and payload capacity, and that it achieved good transparency under payload capacities of 43 bps and 86 bps. Compared with other algorithms with the same payload capacity, the transparency of the proposed algorithm was the same as that of the algorithm in a previous study [21], far superior to the algorithms in [9] and [17], but inferior to the algorithm in [13].

    Figure 4 shows the waveform comparison of the original audio and the carried audio. In order to display the details of the audio more clearly, only a 5-second audio clip is shown here. The spectrograms under different payload capacities are shown in Figure 5. It can be seen that the waveforms and spectrograms of the original audio and the carried audio with different watermarks show no visible changes, which also indicates that the transparency of this algorithm is high. The main reasons are as follows. First, the watermark is only embedded in the intermediate frequency coefficients, and the location of the watermark can be adjusted by optimizing the key parameters. Second, the algorithm only modifies the DCT coefficients by dither modulation, so the audio data are less damaged. The frequency range carrying watermarks can be calculated according to formula (4).

    Figure 4.  Waveform comparison. (a) Original audio. (b) Carried audio with the first watermark. (c) Carried audio with the second watermark.
    Figure 5.  Spectrogram comparison. (a) original audio. (b) Carried audio with the first watermark. (c) Carried audio with the second watermark.

    Table 1 also shows the robustness results under no attack. It can be seen that all algorithms can perfectly extract watermarks from the carried audio in the absence of attacks. The robustness against malicious attacks will be discussed in this section. Two watermarks with different sizes were embedded into the audio respectively, and then different attacks were performed on the carried audio. While meeting the transparency requirements, the BOA is used to adaptively select the algorithm parameters that minimize the fitness function of formula (16), so that the algorithm achieves the strongest robustness against these attacks. The attack types are as follows.

    A. Noise addition: Add Gaussian noise of 30 dB to the carried audio.

    B. Echo addition: Add an echo with a delay of 50 ms to the carried audio.

    C. MP3 compression: Apply MPEG-1 Layer 3 compression at a bit rate of 128 kbps.

    D. Low-pass filtering: Apply a low-pass filter with a cutoff frequency of 12 kHz.

    E. Re-quantization: Re-quantize the carried audio to 8 bits per sample and back to 16 bits per sample.

    F. Re-sampling: Re-sample the carried audio to 22.05 kHz and back to 44.1 kHz.

    G. Amplitude scaling: Scale the amplitude by a factor of 0.8.

    H. Time scale modification (TSM): Apply TSM of 1% to the carried audio.

    I. Jittering: Randomly delete one sample from every 1000 samples of the carried audio.

    J. Random cropping: Randomly cut 100 samples out of the carried audio.
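Several of the attacks above are simple enough to reproduce directly. A sketch of four of them, assuming float audio normalized to [-1, 1] (function names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_awgn(audio, snr_db):
    """Attack A: add white Gaussian noise at the given SNR (dB)."""
    noise_power = np.mean(audio ** 2) / (10.0 ** (snr_db / 10.0))
    return audio + rng.normal(0.0, np.sqrt(noise_power), audio.shape)

def requantize(audio):
    """Attack E: 16 bit -> 8 bit -> 16 bit, modelled here as rounding
    a [-1, 1] float signal to 8-bit resolution."""
    return np.round(audio * 127.0) / 127.0

def jitter(audio, period=1000):
    """Attack I: delete one sample from every `period` samples."""
    keep = np.ones(len(audio), dtype=bool)
    keep[period - 1 :: period] = False
    return audio[keep]

def crop(audio, n=100):
    """Attack J: cut `n` consecutive samples at a random position."""
    start = int(rng.integers(0, len(audio) - n))
    return np.concatenate([audio[:start], audio[start + n:]])
```

Note that jittering and cropping are desynchronization attacks: they shift every subsequent sample, which is why they (and TSM) are typically harder for a watermarking scheme to survive than value-distortion attacks such as noise or re-quantization.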

    The above attacks were applied to the carried audio one at a time. The average BER results (%) are listed in Table 2. The watermarks extracted with the global best butterfly, together with the corresponding NC and SSIM values, are shown in Figures 6 and 7.

    Table 2.  Robustness (BER, %) comparison with other algorithms.

    | Attack | 1st group, BOA | 1st group, no BOA | 2nd group, BOA | 2nd group, no BOA | Paper [9] | Paper [13] | Paper [17] | Paper [21] |
    |--------|----------------|-------------------|----------------|-------------------|-----------|------------|------------|------------|
    | A      | 0.00           | 0.32              | 0.78           | 1.02              | 11.96     | 0.49       | 0.02       | 1.25       |
    | B      | 0.08           | 0.39              | 0.97           | 1.54              | 18.64     | 0.18       | 0.34       | 0.16       |
    | C      | 0.53           | 0.86              | 0.82           | 1.41              | 19.97     | 0.24       | 0.01       | 0.18       |
    | D      | 0.00           | 0.19              | 0.76           | 1.12              | 0.28      | 1.27       | 0.00       | 0.09       |
    | E      | 0.00           | 0.62              | 0.72           | 1.21              | 0.76      | 1.89       | 0.01       | 0.25       |
    | F      | 0.55           | 0.98              | 1.03           | 1.57              | 0.89      | 0.00       | 0.01       | 0.12       |
    | G      | 0.00           | 0.16              | 0.46           | 0.88              | 0.33      | 0.05       | 0.01       | 0.08       |
    | H      | 10.42          | 13.03             | 12.21          | 16.44             | 48.25     | 38.45      | 5.71       | 42.89      |
    | I      | 1.64           | 2.69              | 2.53           | 3.87              | 25.19     | 28.42      | 1.78       | 32.59      |
    | J      | 0.57           | 1.24              | 1.57           | 2.11              | 22.82     | 29.17      | 0.87       | 46.24      |
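The BER and NC values reported in Table 2 and Figures 6 and 7 compare the extracted watermark bits against the originals. A sketch of both metrics, assuming binary {0, 1} watermarks:

```python
import numpy as np

def ber(original_bits, extracted_bits):
    """Bit error rate (%) between the original and extracted watermarks."""
    o = np.asarray(original_bits)
    e = np.asarray(extracted_bits)
    return 100.0 * float(np.mean(o != e))

def nc(original_bits, extracted_bits):
    """Normalized correlation; a value of 1 means the watermarks match."""
    o = np.asarray(original_bits, dtype=np.float64)
    e = np.asarray(extracted_bits, dtype=np.float64)
    return float(np.sum(o * e) /
                 (np.sqrt(np.sum(o ** 2)) * np.sqrt(np.sum(e ** 2))))
```

SSIM, the third metric in the figures, is a structural image-similarity measure computed over the watermark treated as an image; a standard implementation is available in common image-processing libraries rather than being restated here.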

    Figure 6.  The first extracted watermarks. (a) Noise addition (30 dB). (b) Echo addition (50 ms). (c) MP3 compression (128 kbps). (d) Low-pass filtering (12 kHz). (e) Re-quantization. (f) Re-sampling. (g) Amplitude scaling (0.8). (h) TSM (1%). (i) Jittering (1000). (j) Random cropping (100). (k) No attack.
    Figure 7.  The second extracted watermarks. (a) Noise addition (30 dB). (b) Echo addition (50 ms). (c) MP3 compression (128 kbps). (d) Low-pass filtering (12 kHz). (e) Re-quantization. (f) Re-sampling. (g) Amplitude scaling (0.8). (h) TSM (1%). (i) Jittering (1000). (j) Random cropping (100). (k) No attack.

    The experimental results of the two groups in Table 2 show that the proposed algorithm with the BOA is strongly robust under different payload capacities. After the payload capacity was doubled, the BER values in the second group became larger than those in the first group, indicating that robustness decreases as payload capacity increases. In addition, the algorithm with the BOA was more robust than the algorithm without it, showing that the BOA effectively improves robustness by optimizing multiple key parameters.
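The role of the BOA can be sketched generically. The fitness function below stands in for the total BER under the attack set, as in formula (16); the hyperparameter names (sensory modality `c`, power exponent `a`, switch probability `p`) follow common descriptions of the BOA and are assumptions, not the paper's exact settings:

```python
import numpy as np

def boa_minimize(fitness, bounds, n_butterflies=20, iters=100,
                 c=0.01, a=0.1, p=0.8, seed=0):
    """Minimal butterfly optimization algorithm (BOA) sketch.

    `fitness` maps a vector of key parameters to a cost (here: the
    total BER under the attack set); `bounds` is a list of (low, high)
    pairs, one per parameter.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=np.float64).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, (n_butterflies, dim))
    cost = np.array([fitness(x) for x in pop])
    best = pop[np.argmin(cost)].copy()

    for _ in range(iters):
        # Fragrance grows with stimulus intensity (inverse of cost).
        frag = c * (1.0 / (1.0 + cost)) ** a
        for i in range(n_butterflies):
            r = rng.random()
            if rng.random() < p:   # global search toward the best butterfly
                step = (r ** 2 * best - pop[i]) * frag[i]
            else:                  # local random walk between two peers
                j, k = rng.integers(0, n_butterflies, 2)
                step = (r ** 2 * pop[j] - pop[k]) * frag[i]
            cand = np.clip(pop[i] + step, lo, hi)
            cand_cost = fitness(cand)
            if cand_cost < cost[i]:   # greedy acceptance keeps improvements
                pop[i], cost[i] = cand, cand_cost
        best = pop[np.argmin(cost)].copy()
    return best, float(cost.min())
```

In the paper's setting, evaluating `fitness` once means embedding the watermark with the candidate parameters, applying the attacks, extracting, and summing the BERs, which is why the BOA dominates the embedding-side runtime reported in Table 3.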

    When the carried audio was subjected to noise addition at 30 dB, echo addition at 50 ms, MP3 compression at 128 kbps, low-pass filtering at 12 kHz, re-quantization, re-sampling, amplitude scaling or random cropping, the proposed algorithm with the BOA showed particularly strong robustness, reflected in three points: 1) all BER values in Table 2 are very close to 0; 2) the extracted watermarks in Figures 6 and 7 are very clear; 3) all NC and SSIM values in Figures 6 and 7 are very close to 1.

    The proposed algorithm with the BOA also showed good robustness against the jittering attack. The extracted watermark is very similar to the original, as shown in Figures 6(i) and 7(i); the BER values were 1.64% and 2.53% under the two payload capacities, and the NC values were higher than 0.96.

    Under the TSM attack, the BER values in the two groups reached 10.42% and 12.21%, respectively, indicating that the proposed algorithm is weakly robust against TSM. However, these results still meet the IFPI requirements, and the main information can still be distinguished in the extracted images, as seen in Figures 6(h) and 7(h).

    Considering transparency, payload capacity, and robustness together, the proposed algorithm with the BOA is more robust than the algorithms in [9] and [13] against most attacks at a payload capacity of 43 bps. At 86 bps, the proposed algorithm has higher transparency but weaker robustness than the algorithm in [17]; this is mainly because the SNR of the algorithm in [17] is only 19 dB, which does not meet the IFPI standard, so it trades transparency for strong robustness. The proposed algorithm with the BOA has the same payload capacity and transparency as the algorithm in [21] and is more robust against noise addition, amplitude scaling, TSM, jittering and random cropping. The above analysis shows that the robustness and transparency of this algorithm are excellent under different payload capacities, for two main reasons: 1) the feature designed using convolution is relatively stable, so the watermark embedded in it is also stable and is not easily lost when the carried audio is attacked; 2) with the minimum total BER as the optimization goal, the BOA adaptively searches for the key parameters best suited to the given performance requirements, which gives the proposed algorithm strong robustness against various attacks.

    Complexity is an important indicator of a watermarking algorithm's performance: the lower the complexity, the less time the algorithm takes to embed and extract the watermark. Table 3 lists the average runtime (in seconds) of the proposed algorithm and four related algorithms for the embedding and extraction processes.

    Table 3.  Complexity comparison with other algorithms (seconds).

    | Process | 1st group, BOA | 1st group, no BOA | 2nd group, BOA | 2nd group, no BOA | Paper [9] | Paper [13] | Paper [17] | Paper [21] |
    |---------|----------------|-------------------|----------------|-------------------|-----------|------------|------------|------------|
    | Embed   | 856            | 1.80              | 1147           | 1.91              | 2.95      | 3.42       | 2.89       | 1526       |
    | Extract | 0.92           | 0.92              | 1.08           | 1.08              | 1.79      | 2.59       | 1.84       | 1.89       |


    According to the experimental results, the embedding time of the algorithm with the BOA is much higher than that of the algorithm without it, mainly because the BOA must run the embedding and extraction programs repeatedly while optimizing the watermarking parameters. The extraction times of the two groups are essentially the same, indicating that the BOA does not increase the complexity of the extraction process. Compared with the algorithms in [9,13,17], the proposed algorithm without the BOA has lower complexity, as shown by its shorter running time. The algorithm in [21] takes 1526 seconds to embed the watermark, much longer than our algorithm with the BOA; the main reason is that the BOA is simpler than the genetic algorithm used in [21] and can escape local optima more quickly.

    Based on the experimental results for the above four indicators, the following points can be summarized: 1) embedding the watermark in a stable feature makes the algorithm more robust; 2) the algorithm can adaptively search for the optimal parameters to meet the transparency and payload-capacity requirements of practical applications, improving its overall performance; 3) under the same payload capacity and transparency, the algorithm with the BOA is more robust than the algorithm without it, although the BOA increases the complexity of the embedding process.

    An adaptive audio watermarking algorithm based on dither modulation and the BOA has been proposed to improve robustness and to optimize the key parameters of audio watermarking. Using a convolutional operation and dither modulation, the watermark is embedded into a stable feature to prevent watermark loss. During extraction, the binary watermark is recovered by comparing the feature value with its quantized value, without the original audio, which is very convenient in practice. To match the algorithm's key parameters to the performance requirements of different applications, the BOA is used to optimize them: under the constraints of payload capacity and transparency, a fitness function composed of the BER under various attacks is constructed, and over successive iterations the key parameters are adaptively optimized by searching for the position of the butterfly with the strongest fragrance.

    Experimental results demonstrate that the proposed algorithm with the BOA has good transparency, strong robustness, and the ability to search for the optimal parameters. Our research provides a solution to the multi-parameter, multi-objective optimization problem linking the parameters and performance of watermarking algorithms. The population coding method and the fitness-function construction scheme can also serve as an example for applying other metaheuristic algorithms to the parameter optimization of watermarking algorithms. Although the proposed algorithm achieves better robustness and overall performance than related watermarking algorithms, it still has limitations, such as high complexity and weak robustness against TSM. In future research, we will explore methods to overcome TSM, reduce the complexity, and use more intelligent optimization algorithms to improve the overall performance of the watermarking algorithm.

    This work was funded by the High-Level Talent Scientific Research Foundation of Jinling Institute of Technology, China (Grant No. jit-b-201918), the Industry-University-Research Cooperation Project of Jiangsu Province in 2022 (Grant No. BY2022654), the National Natural Science Foundation of China (Grant No. 11601202), the Collaborative Education Project of the Ministry of Education (Grant No. 202102089002) and the Jiangsu Provincial Vice President of Science and Technology Project in 2022 (Grant No. FZ20220114).

    The authors declare that there is no conflict of interest.



  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
