Title: ANALISIS WORD PREDICTION DENGAN MENGGUNAKAN LANGUAGE MODEL Bidirectional Encoder Representations from Transformers (BERT) PADA DATASET KALIMAT BAHASA INDONESIA (Analysis of Word Prediction Using the Bidirectional Encoder Representations from Transformers (BERT) Language Model on an Indonesian-Sentence Dataset)
Creator: AGHITA NAMIRA, YULIZA
Subjects: 000 Computer science, information and general works; 500 Natural sciences and mathematics; 600 Technology (applied sciences)
Publisher: FAKULTAS MATEMATIKA DAN ILMU PENGETAHUAN ALAM (Faculty of Mathematics and Natural Sciences), UNIVERSITAS LAMPUNG
Date: 2025-05-21
Type: Skripsi (undergraduate thesis); NonPeerReviewed
Format: text

Abstract:
Indonesian, as the national language, plays a crucial role in various fields, including the development of Natural Language Processing (NLP) technologies. One modern approach in NLP is the use of transformer-based models such as BERT (Bidirectional Encoder Representations from Transformers) to perform Masked Language Modeling (MLM), which involves predicting missing tokens in a sentence based on their context. This study evaluates the performance of a BERT model on Indonesian sentences using a dataset of 27,600 Indonesian sentence entries. The model was trained under two schemes: without augmentation (Scheme 1) and with data augmentation techniques (Scheme 2). Evaluation results show that Scheme 2 performs better, with accuracies of 42.1% (top-1), 53.7% (top-3), and 58.1% (top-5), compared to Scheme 1, which achieved 29% (top-1), 42.6% (top-3), and 52.6% (top-5). This improvement indicates that data augmentation increases the diversity of training sentences, thereby improving the model's ability to predict masked words.

Keywords: BERT, Masked Language Modeling, Indonesian Language, NLP

Identifiers:
- http://digilib.unila.ac.id/88902/3/ABSTRAK.pdf
- http://digilib.unila.ac.id/88902/1/SKRIPSI%20FULL.pdf
- http://digilib.unila.ac.id/88902/2/SKRIPSI%20TANPA%20BAB%20PEMBAHASAN.pdf

Citation: AGHITA NAMIRA, YULIZA (2025) ANALISIS WORD PREDICTION DENGAN MENGGUNAKAN LANGUAGE MODEL Bidirectional Encoder Representations from Transformers (BERT) PADA DATASET KALIMAT BAHASA INDONESIA. FAKULTAS MATEMATIKA DAN ILMU PENGETAHUAN ALAM, UNIVERSITAS LAMPUNG.

Relation: http://digilib.unila.ac.id/88902/
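Note: the masked-word prediction task and top-k accuracy evaluation described in the abstract can be sketched roughly as below. This is a minimal, hypothetical illustration assuming the Hugging Face transformers library and the publicly available indobenchmark/indobert-base-p1 Indonesian BERT checkpoint; the record does not name the exact pretrained model, dataset split, or augmentation procedure, and the example sentence/word pairs are invented for demonstration only.

    # Sketch of masked-word prediction with top-k accuracy (assumptions noted above).
    from transformers import pipeline

    # Load a fill-mask pipeline with an Indonesian BERT checkpoint (assumed model name).
    fill_mask = pipeline("fill-mask", model="indobenchmark/indobert-base-p1")

    def topk_hit(masked_sentence: str, target_word: str, k: int) -> bool:
        """Return True if the masked-out word appears among the model's top-k predictions."""
        predictions = fill_mask(masked_sentence, top_k=k)
        return any(p["token_str"].strip() == target_word for p in predictions)

    # Hypothetical evaluation pairs: a sentence with one word replaced by the mask token,
    # together with the word that was masked out.
    mask = fill_mask.tokenizer.mask_token
    examples = [
        (f"Saya pergi ke {mask} untuk membeli buku.", "toko"),
        (f"Bahasa Indonesia adalah bahasa {mask} Republik Indonesia.", "resmi"),
    ]

    # Count hits at the cut-offs reported in the abstract (top-1, top-3, top-5).
    for k in (1, 3, 5):
        hits = sum(topk_hit(sentence, word, k) for sentence, word in examples)
        print(f"top-{k} accuracy: {hits / len(examples):.1%}")

In the study itself, the same top-1/top-3/top-5 hit counting would be applied over the held-out portion of the 27,600-sentence dataset rather than these toy pairs, once for each training scheme (with and without data augmentation).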