The fix for this problem may be extremely easy. What happened is that some disks of the RAID failed and were ejected. This happens. But the array will no longer assemble if the failed disks come first on the mdadm assemble command line, because for some reason mdadm does not go by what the majority of the disks say, but by what the first disks say. So if you have a RAID with 10 disks and the first two on the command line are the failed ones, mdadm will reject the remaining 8 because they are not compatible with those first two. All you need to do is list the failed disks at the end and add --force to activate the array again:
# Wrong: the failed disks (here sdX and sdY) come first, so assembly fails
mdadm --assemble /dev/md1 /dev/sdX /dev/sdY /dev/sd[a-f]
# Right: failed disks last, with --force
mdadm --assemble --force /dev/md1 /dev/sd[a-f] /dev/sdX /dev/sdY
Note that there is probably still a good reason why those disks were marked as failed in the first place...